title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Towards Calibrated Robust Fine-Tuning of Vision-Language Models | Accept (poster) | Summary: The paper proposes a novel framework for robust fine-tuning of CLIP. To enhance out-of-distribution accuracy and calibration, the authors incorporate a singular value-based constraint term, self-distillation, and EMA. Extensive experiments on synthesized data and ImageNet demonstrate that CaRot can achieve better OOD performance and reliable predictions.
Strengths: 1. This paper investigates an under-explored but important problem, the calibration of CLIP after fine-tuning.
2. The theoretical analysis connecting the smallest singular value of the image representation to OOD robustness is sound.
Weaknesses: 1. The overall framework is not well-motivated. Why do we need to incorporate self-distillation and exponential moving average (EMA)? These two techniques are not the main contributions of this paper and are not directly relevant to the primary theoretical analysis of singular value constraints.
2. The connection to Vision-Language Models is weak. The paper does not justify why the proposed soft constraint term is specifically tailored to Vision-Language Models like CLIP. In other words, does this regularization only apply to visual language models?
3. The result analysis is insufficient. The paper does not report the average confidence level, which may give the impression that the calibration improvement comes only from the accuracy improvement.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Can the proposed singular value-based constraint improve other fine-tuning methods, such as FLYP or WISE-FT?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **R4-1**
> The overall framework is not well-motivated. Why do we need to incorporate self-distillation and exponential moving average (EMA)? These two techniques are not the main contributions of this paper and are not directly relevant to the primary theoretical analysis of singular value constraints.
Thanks for your constructive question! As you point out, we do not claim those components as our contribution. First, we conduct a novel theoretical analysis of OOD generalization and calibration errors, and the result of these analyses motivates us to pursue better ID calibration as well as an increased smallest singular value of the input covariance matrix. Then, we simply employ self-distillation (SD) with an exponential moving average (EMA) as one option among many training-time calibration approaches. Our method allows adopting other calibration approaches, and we provided results on label smoothing (LS), a representative train-time calibration regularization, in Appendix B (Table D). For this rebuttal, we further provide results on vanilla knowledge distillation (KD) from zero-shot CLIP as well as varying magnitudes of label smoothing (please refer to Fig. 3 in the attached PDF). As we can see, other types of calibration methods also yield better performance than a competitive baseline, but EMA-SD yields the best performance. In terms of calibration, EMA-SD can be interpreted as input-dependent label smoothing [14], which adaptively adjusts the smoothed pseudo target label depending on the input, whereas label smoothing provides a fixed, non-adaptive pseudo target.
Note that the combination of SD and EMA is also not original to us; this combination has been widely adopted and validated in the context of vision-language model training [18] as well as self-supervised learning [15,16,17].
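For concreteness, the EMA-SD scheme described above can be sketched as follows (a minimal numpy illustration of the general technique, not the paper's exact implementation; the `momentum` value and toy shapes are assumptions):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ema_update(teacher, student, momentum=0.999):
    # Teacher parameters track an exponential moving average of the student's.
    return {k: momentum * teacher[k] + (1 - momentum) * student[k] for k in teacher}

def self_distillation_loss(student_logits, teacher_logits):
    # Cross-entropy against the EMA teacher's soft targets: an input-dependent
    # analogue of label smoothing, since the smoothed target varies per input.
    soft_targets = softmax(teacher_logits)
    log_probs = np.log(softmax(student_logits) + 1e-12)
    return -(soft_targets * log_probs).sum(axis=-1).mean()
```

In contrast, vanilla label smoothing would mix every one-hot target with the same fixed distribution regardless of the input.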
**R4-2**
> The connection to Vision-Language Models is weak. The paper does not justify why the proposed soft constraint term is specifically tailored to Vision-Language Models like CLIP. In other words, does this regularization only apply to visual language models?
We appreciate the reviewer's keen interest in the broader impact of this work. Given that the main challenge of robust fine-tuning is to fine-tune foundation models without losing their existing generalizability to unseen domains, we chose CLIP as our primary focus, following previous works [6,7,8,9]. Since CLIP is widely used for diverse applications, not only as a standalone model but also as a core component of open-source multimodal large language models and text-to-image generative models, we believe that validation on CLIP fine-tuning is crucial in terms of its downstream impact.
However, as reviewers g5ML and yVZD pointed out, our theoretical results are not confined to CLIP-like VLMs, and verifying the applicability of our method to other kinds of models will further broaden its impact. Therefore, we expanded our experimental scope to a vision-only model, a ViT-Base pre-trained with the DINOv2 objective [10], using the DomainBed benchmark [11], and confirmed the effectiveness of our proposed framework. Details are elaborated in the global response.
As shown in Table 3 of the PDF file, CaRot shows performance gains in 7 out of 8 cases in terms of Accuracy and ECE on two datasets across two model selection criteria, and achieves a relatively smaller performance deviation.
**R4-3**
> The result analysis is insufficient. The paper does not report the average confidence level, which may lead to the confusion that the calibration improvement may only come from the accuracy improvement.
To demonstrate the performance improvement more clearly, we repeated the experiments three times with different seeds for each method and confirmed that the improvement exceeds the error bars. The results are provided in Figure 4 of the attached PDF.
**R4-4**
> Can the proposed singular value-based constraint improve other fine-tuning methods, such as FLYP or WISE-FT?
In Table 3 of our manuscript, we provided an ablation study validating the effectiveness of our singular value constraint term by plugging it into vanilla fine-tuning and FLYP, where we see consistent improvement in OOD performance, aligned with our theoretical analysis. Furthermore, to address the reviewer's concern more extensively, we additionally provide results for the combination of WiSE-FT and our orthogonal constraint term in Table 2 of the attached PDF.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer yVZD,
We appreciate reviewer yVZD's constructive feedback, which helped us further improve our draft.
We have submitted our responses to the concerns raised by reviewer yVZD, and we are eager to know whether these replies address your concerns.
Any further comments or questions are welcome.
Thank you
---
Rebuttal 2:
Comment: Thanks for the response, and some of my concerns have been addressed.
However, I still have the following two concerns:
1. The main issue in this paper is the calibration for CLIP. However, the connection between the proposed method and vision-language model (CLIP) is still weak. I think that the proposed method can be applied to most classification models and is not limited to CLIP.
2. The calibration improvement brought by CaRot seems to come only from accuracy improvement. It does not address the overconfidence issue of CLIP caused by fine-tuning. For example, in Table 2 of the attachment, compared to the baseline, the accuracy of FT increased by 4%, but the ECE only decreased by 3%. Hence, it appears that CaRot is merely a method to improve accuracy rather than a confidence calibration method.
I will keep my original rating.
Best regards,
yVZD
---
Rebuttal 3:
Comment: Dear reviewer yVZD,
We are delighted that some of your concerns have been addressed.
For the remaining concerns,
* [first concern] As you pointed out in the original review, we derived our theory in a common classification setting that is not confined to vision-language models, for the generality of the theoretical statement, and during the rebuttal we observed that our OOD error bound-based regularization method is also effective on a vision-only model.
* Regarding the connection between VLMs and the proposed method, as described in Section 4.1 of the draft, our orthogonality constraint on the visual projection layer is seamlessly combined with a multimodal contrastive loss, which enables it to be interpreted as a constrained singular value decomposition (SVD) on the cross-covariance matrix of image-text representation pairs.
* We appreciate that you found our theory and corresponding method to have broad potential applications, and we respect your concern. We will reflect your comment to refine the presentation of our revised manuscript. Thanks again.
* [second concern] We would like to respectfully address your claim that CaRot's improved confidence calibration is merely due to accuracy improvement.
* Allow us to clarify that Table 2 of the attachment is the ablation study of the orthogonality constraint (OC) on FLYP and WiSE-FT, which shows the consistent OOD improvement from OC. **The result of CaRot is not included in that table.** We attach the table with CaRot's results below, demonstrating that **CaRot improves OOD accuracy and ECE by 5.05 points (8.7%) and 0.1395 (63.8%), respectively. Thus the improvement in ECE is far more significant than the improvement in accuracy.**
| Method | WiSE-FT | ID Acc | ID ECE | OOD Acc | OOD ECE |
|------------|---------|--------|--------|---------|---------|
| FT | X | 81.53 | 0.0884 | 57.50 | 0.2186 |
| FT w/ OC | X | 81.45 | 0.0826 | 59.10 | 0.2051 |
| FLYP | X | 82.69 | 0.0635 | 59.46 | 0.1831 |
| FLYP w/ OC | X | 82.51 | 0.0651 | 59.51 | 0.1803 |
| FT | O | 82.16 | 0.0820 | 61.22 | 0.1920 |
| FT w/ OC | O | 82.03 | 0.0770 | 61.97 | 0.1829 |
| FLYP | O | 82.98 | 0.0798 | 61.27 | 0.1788 |
| FLYP w/ OC | O | 82.80 | 0.0627 | 61.41 | 0.1682 |
| **CaRot** | X | 83.13 | 0.0470 | 62.55 | 0.0791 |
* [second concern] Meanwhile, we partly agree with your statement that improvement in accuracy could contribute somewhat to calibration (or vice versa). **However, even though accuracy and calibration are sometimes correlated, the relationship is not causal.** For example, Figure 2 of Guo et al. 2017 shows that improved accuracy can hurt calibration, and Table 1 of Levine et al. shows that CLIP ViT-H-14 and CLIP ViT-B-16 have worse calibration than CLIP ViT-L-14 and CLIP ViT-B-32, respectively, even though they achieve far better classification accuracy. Figure 9 of Dehghani et al. also implies that increased accuracy does not result in improved calibration.
* We observe the same evidence in Figure 6 of the attached PDF file under our experimental setup. For instance, from the left-most point of FT to the left-most point of FLYP, Accuracy roughly improves from 81.0 to 82.0 (+1.2%), but the ECE increases (worsens) from 0.06 to 0.08 (+33.3%). Meanwhile, CaRot achieves Accuracy and ECE of 83.0 (+2.4%) and 0.05 (-20%; improved). This indicates that **CaRot is not just a method for accuracy improvement, but a method that improves accuracy and calibration simultaneously in a single theory-motivated framework.**
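Since the argument above turns on ECE, here is a minimal sketch of how expected calibration error is commonly computed (equal-width confidence bins; the bin count is an assumption, not necessarily the paper's exact protocol):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    # ECE: bin-population-weighted average gap between per-bin mean confidence
    # and per-bin accuracy; lower means better-calibrated predictions.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece
```

A model that predicts with confidence 0.8 and is right 80% of the time gets ECE 0; a model that is always confident (1.0) but always wrong gets ECE 1.0.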
### Reference
1. Guo et al. 2017, On Calibration of Modern Neural Networks
2. Levine et al. 2023, Enabling Calibration in the Zero-Shot Inference of Large Vision-Language Models
3. Dehghani et al. 2023, Scaling Vision Transformers to 22 Billion Parameters
We thank reviewer yVZD again for the intensive commitment to reviewing our paper, and we appreciate the valuable comments that contribute to improving the quality of our draft.
Best regards
---
Rebuttal 4:
Title: Sincerely looking forward to your feedback
Comment: Dear Reviewer yVZD,
We would like to express our sincere gratitude for your invaluable comments so far.
Your comments have definitely improved the quality of our manuscript and led us to further refine our statements.
For now, we are wondering whether our responses address your remaining concerns. Could you check our responses by any chance?
* We clarify that our theory is not confined to VLMs, for its generality and broader impact, but our method, the integration of multimodal contrastive learning with an orthogonality constraint, enables it to be interpreted as a constrained singular value decomposition on the cross-covariance matrix of image-text representation pairs during fine-tuning of a VLM (please refer to Section 4.1 of our manuscript).
* Regarding your second concern, we provide counter-examples showing that improved accuracy does not necessarily translate to better calibration (with references) and clarify the experimental results to fix a misunderstanding, thereby demonstrating that CaRot is not just an accuracy-improving method but also promotes non-trivial confidence calibration.
If these replies address your concerns, could we politely ask you to reconsider your rating on our paper?
Sincerely,
The Authors | Summary: This paper aims to improve the accuracy and reduce the calibration error on OOD data when fine-tuning VLM models. The authors first demonstrate that the OOD calibration error and the OOD classification error can be bounded by the ID calibration error and the smallest singular value of the ID input covariance matrix. To address this, the authors apply orthogonal regularization to increase the smallest singular value and use self-distillation to improve ID calibration. Several experiments on distribution-shifted datasets validate the effectiveness of the proposed method.
Strengths: The paper is well-organized and clearly presented. The theoretical finding that OOD accuracy and calibration error can be bounded by ID error and the smallest singular value is both novel and insightful. The proposed fine-tuning method is simple and intuitive. Numerical analysis of the error bounds and empirical validation on ImageNet OOD benchmarks make the proposed method convincing.
Weaknesses: (1) Some of the notations are misleading. In line 86, the subscript of $x$ denotes the $n$-th sample in $\mathcal{D} $. However, in line 113, the subscript of $x$ denotes the value of the $i$-th dimension of $x$.
(2) Some of the assumptions are not realistic. In line 113, the authors suppose the hypothesis $h_i$ is a one-dimensional function, each applying to one dimension of the input, which does not hold most of the time. Besides, the authors assume each dimension pair is Gaussian. I am unsure whether the conclusion of Theorem 3.1 still holds if these two assumptions are violated.
(3) The orthogonal constraint is too strong. The finding of Theorem 3.1 states that lifting the smallest singular value is enough to reduce calibration and classification errors. However, the authors implement an orthogonal constraint in the learning procedure, enforcing the features to be an orthogonal basis, which leads to a smaller largest singular value. In Table 3, we can see that applying such regularization alone reduces ID accuracy. I wonder whether applying a regularization that only increases the smallest singular value could achieve higher performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been discussed and no negative societal impact has been identified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **R3-1**
Thank you for pointing out the misleading notations. We will revise the notation in the theorem (line 113) as $x[i], \; i=1,...,d$.
**R3-2**
We appreciate the reviewer's interest and detailed look at our theorem! As we noted at line 112 of our manuscript, we set the input $x$ as a representation vector produced by the projection layer of the CLIP image encoder, without loss of generality. Then, our hypothesis function $h(\cdot)$ is a single linear-layer classification head that maps the representation vector $x$ to a one-dimensional logit ranging over $[0,1]$. Here, if we drop the bias term of this classification head, the weight vector $[a_{1},...,a_{d}]$ of $h(\cdot)$ performs a linear combination over $x$, i.e., $h(x)=\sum_{i=1}^{d} h_{i}(x_{i})=a_{1}x_{1}+...+a_{d}x_{d}$, so our first assumption always holds.
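The first assumption can be verified mechanically: a bias-free linear head decomposes exactly into a sum of one-dimensional functions (a toy numpy check; the dimension and random values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
a = rng.normal(size=d)   # weight vector of the linear head h
x = rng.normal(size=d)   # projected representation vector

h_x = a @ x                                   # h(x) as one linear map
h_sum = sum(a[i] * x[i] for i in range(d))    # sum of 1-D functions h_i(x_i)
assert np.isclose(h_x, h_sum)                 # the two forms coincide exactly
```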
Regarding the second assumption of the pairwise Gaussian property, as reviewer a8to notes, the representations of modern neural networks are not necessarily jointly Gaussian for every $(x_{i},x_{j})$. However, our empirical validation with 111 shallow and narrow multi-layer perceptron (MLP) networks in Section 5.1 (see Fig. 3) presents strong evidence supporting the validity of our bounds, and the results of experiments on ImageNet with a much wider and deeper network, i.e., CLIP ViT-B, are also well aligned with our theory.
Moreover, there is a body of work revealing that the outputs of infinitely wide neural networks (whether MLPs [19], convolutional neural networks [20], or Transformers [21]) are Gaussian processes (GPs), and GPs ensure that every finite collection of their elements is jointly Gaussian. Therefore, our second assumption becomes more valid as the network's layer-wise width increases. While our numerical analyses are limited to ViT-L scale models, we believe the current large-scale modeling regime spurs exploration of even larger widths [22], where our second assumption becomes more valid [19]. Meanwhile, we are aware of theoretical results on the Gaussianity of neural network representations in the finite-width regime [23], and we plan to explore relaxing our assumption based on those insights. Thanks again for the constructive criticism.
**R3-3**
As the reviewer pointed out, there are other possible design choices for the smallest singular value regularization. One straightforward way is to directly regularize only the smallest singular value. In Table 1 of the attached PDF, we provide the result of this direct singular value regularization. This approach achieves a smaller ID-OOD performance gap compared with our current method, which is aligned with our theoretical results (see appendix) and those of [5]. However, direct singular value regularization entails a singular value decomposition (SVD) of the covariance matrix of size $\mathbb{R}^{d \times d}$, which requires cubic time complexity in the feature dimension and significantly increases computational cost, especially in the large-scale modeling regime. Therefore, we employ the orthogonality regularization on the final projection matrix $W$ of the visual encoder, which indirectly (yet effectively) increases the smallest singular value of the input representation and its covariance matrix.
To be specific, enforcing the projection matrix to be an orthogonal matrix $O$ ensures that the rank of the input representation and its covariance matrix produced by our method is closer to the upper-bound rank than with any other possible projection matrix $W$ without the constraint, as below,
\begin{equation}
\begin{split}
\text{rank}(\tilde{Z}^{T}\cdot\tilde{Z}) &= \text{rank}(\tilde{Z}) \\\\
&= \text{rank}(W\cdot Z^{T}) \\\\
&\le \text{rank}(O\cdot Z^{T}) \\\\
&= \text{rank}(\hat{Z}) \\\\
&= \text{rank}(\hat{Z}^{T}\cdot\hat{Z})
\end{split}
\end{equation}
where $Z$ is the input feature before projection, and $\tilde{Z}$ and $\hat{Z}$ denote input representations obtained via projection matrices $W$ and $O$, respectively. Owing to its popularity, the SVD has been continuously studied for computational efficiency, and we found approximation-based fast SVD methods such as `scipy.sparse.linalg.svds`, though it is non-differentiable. Devising a fast, differentiable SVD-based fine-tuning method, well aligned with our theory, would be a very exciting direction for future work, and we appreciate reviewer a8to's valuable query.
Meanwhile, the reduced ID accuracy when applying the orthogonal constraint can be interpreted as compensation for improved OOD generalizability, as is also the case for direct SVD-based regularization. Intuitively, this phenomenon indicates that a large smallest singular value encourages the model to capture more diverse features while somewhat compromising ID-specific discriminative biases. We agree that the ultimate goal should be to achieve good OOD generalization without compromising ID adaptation capability. We leave devising a method that produces a better ID-OOD trade-off as future work.
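To illustrate the two design choices contrasted above, a minimal numpy sketch of the soft orthogonality penalty $\|W^{T}W - I\|_{F}^{2}$ (a standard form; the exact penalty and weighting used in the paper may differ) versus the direct computation of the smallest singular value via full SVD:

```python
import numpy as np

def orthogonality_penalty(W):
    # Soft constraint pushing the projection matrix toward an orthogonal one,
    # which indirectly lifts the smallest singular value of the representations.
    d = W.shape[1]
    gram = W.T @ W
    return np.linalg.norm(gram - np.eye(d), ord='fro') ** 2

def smallest_singular_value(Z):
    # Direct alternative: sigma_min of the centered feature matrix via full SVD,
    # which is cubic in the feature dimension -- hence the indirect penalty above.
    Zc = Z - Z.mean(axis=0, keepdims=True)
    return np.linalg.svd(Zc, compute_uv=False).min()

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # an exactly orthogonal 8x8 matrix
penalty = orthogonality_penalty(Q)            # near machine zero for orthogonal W
```

Regularizing `smallest_singular_value` directly would require differentiating through the SVD at every step, whereas `orthogonality_penalty` only touches the projection weights.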
---
Rebuttal Comment 1.1:
Title: Reminder for discussion, Reviewer a8to
Comment: Dear Reviewer a8to,
We appreciate reviewer a8to's valuable comments that significantly contribute to improving our manuscript.
We have submitted our responses to reviewer a8to's concerns, and we would like to know whether these replies address them.
Any further comments or questions are welcome.
Thank you
---
Rebuttal 2:
Comment: Dear reviewer a8to,
1. We truly appreciate your suggestions on fixing the notation, clarifying the assumptions of the theory, and discussing potential alternative design choices for the constraint term. We promise to update our manuscript thoroughly as you pointed out, which will significantly refine its presentation quality.
2. We also appreciate your further interest in the behavior of orthogonality constraint (OC) and the consideration of raising the score!
Allow us to clarify the experimental setup for Tables 3 and 4 of our manuscript. The left side of **Table 4 shows the results of the ablation study where the self-distillation (SD) regularization term is applied with a 1.5 multiplier**. That is, the first and third rows of Table 4 are equal to the seventh and eighth rows of Table 4. For a clear comparison, we insert a table below that includes results with and without SD under varying OC magnitudes. Meanwhile, as you can see **by comparing the first and second, and the fifth and sixth rows of Table 3, the orthogonality constraint (`without SD term`) slightly trades off ID accuracy**. This phenomenon is also observed in additional ablation studies with the WiSE-FT method in Table 3 of the rebuttal attachment PDF.
| Objective | OC | ID Acc | ID ECE | OOD Acc | OOD ECE |
|-----------|-----|--------|--------|---------|---------|
| MCL | 0 | 82.69 | 0.0635 | 59.40 | 0.1836 |
| MCL | 0.1 | 82.48 | 0.0652 | 59.41 | 0.1807 |
| MCL | 0.2 | 82.51 | 0.0651 | 59.51 | 0.1803 |
| MCL w/ SD | 0 | 83.03 | 0.0523 | 62.28 | 0.0772 |
| MCL w/ SD | 0.1 | 83.18 | 0.0511 | 62.42 | 0.0779 |
| MCL w/ SD | 0.2 | 83.13 | 0.0470 | 62.55 | 0.0791 |
**Therefore, we derive non-contradictory conclusions from Tables 3 and 4 of the manuscript, indicating that the orthogonality constraint alone somewhat compromises ID adaptation capability for OOD generalization, while this tradeoff is mitigated when SD is applied together with it, which is our final learning objective.**
We speculate that,
1) When the orthogonality constraint is applied alone, the model is forced to capture diverse features for OOD generalization, yet without any restriction on the type and priority of learned features. While this contributes to enhanced OOD generalization, **diverse features without prioritization might compromise strong ID performance.**
2) However, the SD regularization produces input-dependent soft labels that preserve similarity structures between classes. This allows the model to learn diverse features while **putting a higher priority on features shared across similar classes** (as judged by the EMA teacher model), so that the features are beneficial not only for OOD generalization but also for ID adaptation. The joint use of the orthogonality constraint and self-distillation regularization can be understood as inducing a narrower solution set, which potentially yields better generalization (Huh et al. 2024).
[Huh et al. 2024] The Platonic Representation Hypothesis
Thanks again for your constructive comment so far.
Best regards
---
Rebuttal Comment 2.1:
Comment: The authors have addressed my concerns effectively, and I have increased my score accordingly. | Summary: VLMs have been shown to be effective in a wide array of applications, though they can fail under certain domain shifts. In this work, the authors first observe that generalization and calibration errors under domain shifts are upper-bounded in terms of the ID calibration error and the smallest singular value of the ID covariance matrix. Building upon this intuition, the authors then propose a novel fine-tuning scheme for VLMs in the shape of a contrastive objective and a self-distillation technique. Experimental evaluation on Imagenet and quite a few of its variants is presented to show the effectiveness of the proposed approach.
Strengths: - The motivation for the work is neatly presented with a clear link between the motivation and the proposed method. While the exact method does not depend on Theorem 3, it is still a good way to explain why the proposed method can serve as a proxy to reduce OoD unreliability of the models. Sections 3 and 5.1 are really helpful in this regard.
- Throughout the work, the design choices are explained clearly in a technically sound manner with intuitive connections being made between the utilized concepts and the math behind.
- The experimental benchmarks chosen for evaluation seem adequate, as they involve a rather wide range of Imagenet variants, from those much more similar to Imagenet, like ImagenetV2, to Imagenet-S. From the results of these experiments, it can be seen that the proposed method mostly brings significant improvements in the domain-shift settings.
Weaknesses: - In certain cases, such as Imagenet-R on Table 2 and 3, it appears that the zero-shot CLIP is better than _any_ of the fine-tuning methods including the proposed approach in terms of _both_ the accuracy and calibration. Furthermore, the calibration after fine-tuning with CaRoT is worse than the zero-shot CLIP under harsher domain-shift benchmarks, namely the ObjectNet, Imagenet-{A, S, R}. I wonder how would the Table 2 look like had the model had access to multiple training environments (e.g maybe having a subset of Imagenet-S during fine-tuning, then evaluated on Imagenet-R) as it is often the case for domain generalization benchmarks [A].
- In Tables 2 and 3, the proposed approach seems to be falling behind the other methods in terms of Imagenet accuracy, which may limit its usefulness for a wider range of applications.
- It would have been good to have Imagenet-C [B] here as well, as it is perhaps the most commonly used among these variants for domain shift benchmarking. In particular, having some analyses based on different types of corruptions and under different severity levels could have provided more insights into the limitations and strengths of the proposed approach.
- One of the minor issues I can see with the work is regarding the usage of the term "OOD", especially since the benchmarks used vary significantly from Imagenet-V2 (which was designed to be distributionally as similar as possible to the original Imagenet) to Imagenet-S. While I acknowledge that the authors clarify what they mean under Section 2, I still encourage them to check out [C] for a nice template on works involving the term.
[A] Gulrajani et al., "In search of lost domain generalization", ICLR 2021
[B] Hendrycks et al., “ Benchmarking Neural Network Robustness to Common Corruptions and Perturbations”, ICLR 2019
[C] Farquhar et al., "What ‘Out-of-distribution’ Is and Is Not", NeurIPS-W 2022
Technical Quality: 3
Clarity: 3
Questions for Authors: I wonder what the authors think about the first weakness I have described above and it would be great to see detailed Imagenet-C results.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are somewhat discussed in the conclusion. I appreciate the fact that they have stated that they could not include larger models due to computational constraints which prompted me to avoid asking for how the proposed method would perform with larger models trained with much larger data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **R2-1**
> (a) In certain cases, such as Imagenet-R on Table 2 and 3, it appears that the zero-shot CLIP is better than any of the fine-tuning methods including the proposed approach in terms of both the accuracy and calibration. Furthermore, the calibration after fine-tuning with CaRoT is worse than the zero-shot CLIP under harsher domain-shift benchmarks, namely the ObjectNet, Imagenet-{A, S, R}.
As the reviewer pointed out, zero-shot CLIP shows strong performance in terms of both accuracy and calibration error. The robust fine-tuning literature has focused on preserving (or improving) OOD accuracy after fine-tuning CLIP on ID data. In this work, we expanded the metrics of interest to OOD calibration error and observed that existing baselines sacrifice confidence calibration as well as OOD generalization. As shown in Tables 1 and 2 in the manuscript, compared with other fine-tuning methods, CaRot achieves the best accuracy on all OOD datasets and the minimum calibration error on four out of five OOD datasets.
We agree that improving OOD generalization and calibration error beyond zero-shot CLIP should be the ultimate goal of robust fine-tuning research, and we leave this as our future work.
> (b) I wonder how would the Table 2 look like had the model had access to multiple training environments (e.g maybe having a subset of Imagenet-S during fine-tuning, then evaluated on Imagenet-R) as it is often the case for domain generalization benchmarks [A].
We thank the reviewer for suggesting such a meaningful evaluation setup to investigate the versatility of CaRot. As recommended, we conducted experiments on multi-source domain generalization with DomainBed [11]. We used the PACS and VLCS datasets, where each dataset consists of four domains sharing the same class labels (i.e., covariate shift). Following the leave-one-out setting, we train the model on three domains and test on the unseen remaining domain (please refer to the global response for details).
The results in Table 3 of the attached PDF show that CaRot on top of ERM++ achieves the best performance in three out of four cases in terms of Accuracy and in all four cases in terms of ECE. In particular, on the PACS dataset under the training-domain validation model selection setup, CaRot improves the accuracy of ERM++ from 95.0 to 96.2 and the ECE of ERM++ from 0.025 to 0.013, which is significant given that the absolute performance of ERM++ is already very competitive. This indicates that CaRot can be effectively adopted in setups where multiple training domains are available, enhancing both the accuracy and calibration of the algorithm.
**R2-2**
We acknowledge the reviewer's concern regarding the marginal underperformance of CaRot on ID (ImageNet) data compared to Lipsum-FT. However, the performance gain on the OOD data (61.04 -> 62.55; **2.5%**), which is significantly larger than the loss of ID performance (83.30 -> 83.13; **0.2%**), implies better effective robustness of CaRot. This robustness is crucial for many safety-critical real-world applications, where the ability to generalize to unseen data is often more important than slight improvements on the training data.
**R2-3**
We appreciate the reviewer's suggestion, and we evaluated CaRot on ImageNet-C [13], consisting of 15 synthetic corruptions with five severities. We report the results averaged over severities in Figure 1 of the attached PDF. The results show that CaRot consistently outperforms other fine-tuning methods in terms of Accuracy and ECE across all corruption types. Specifically, for coarser corruptions such as Snow, Frost, Fog, Brightness, and Contrast, which are more natural types of shifts than the others, CaRot greatly outperforms baseline methods, whereas on finer corruptions such as Elastic transform the performance gain from CaRot relatively diminishes. Due to space constraints, we attached box plots only for brightness and elastic transform in Figure 2 of the PDF.
**R2-4**
We appreciate the reviewer's suggestion on clarifying the terminology of "OOD". We will clarify the term in the manuscript with reference to the recommended paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the hard work the authors have put into the rebuttal. They seem to have answered my points fairly well, especially with the detailed DomainBed and Imagenet-C experiments and visuals. I am increasing my score accordingly.
---
Rebuttal 2:
Comment: We are so delighted that our rebuttal addresses your concerns and questions!
Thank you for your valuable comments and for taking the time to review our paper thoroughly. | Summary: This research paper presents a novel fine-tuning approach for improving out-of-distribution (OOD) generalization and calibration in Vision Language Models (VLMs). By identifying a shared upper bound for OOD accuracy and calibration errors, the authors develop a constrained multimodal contrastive loss framework enhanced by self-distillation. Empirical validation and tests on ImageNet benchmarks demonstrate the method's effectiveness in enhancing both OOD accuracy and calibration error.
Strengths: - The proposed approach is well motivated by theory. The connection of out-of-distribution (OOD) generalization and calibration in terms of the smallest singular value of the input covariance matrix of in-distribution (ID) data is interesting and novel to my knowledge.
- The experiments are extensive and the results are good on both fronts.
Weaknesses: - The theory part is a bit obscure. Is there an intuitive interpretation of the smallest singular value of ID input covariance matrix in the studied context? How tight is the upper bounds (Theorem 3.1)?
- Were the main experiments repeated multiple times? If so, can the authors provide the error bars for the main experiments?
- The paper limits the scope to VLM fine-tuning, but does it also apply to fine-tuning other kinds of models?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed one limitation of the work. I don't notice any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## R1 (g5ML)
**R1-1**
> (a) The theory part is a bit obscure. Is there an intuitive interpretation of the smallest singular value of ID input covariance matrix in the studied context?
We introduce the smallest singular value into the bounds for OOD errors through two steps: 1) we first derive OOD calibration and generalization error bounds in terms of the ID calibration error and the H-square disagreement between ID and OOD; 2) we then introduce the reciprocal of the smallest singular value of the ID covariance matrix as an upper bound on the H-square disagreement. By substituting the H-square disagreement (which requires access to OOD data) with the smallest-singular-value term, we obtain OOD error bounds that depend solely on ID data.
According to Theorem 3.1, the upper bound on the OOD classification and calibration errors decreases as the smallest singular value increases. Intuitively, enforcing larger smallest singular values can be interpreted as inducing a larger effective rank of the input representation and its covariance matrix. The effective rank measures how evenly the singular values are distributed, indicating that the model learns more diverse features [1] and is less likely to focus on ID-specific biased attributes, thus improving OOD generalizability. This concept of de-correlating the covariance of input representations to increase the effective rank is well-studied in the context of self-supervised learning, which aims to learn transferable representations [2,3].
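For concreteness, the entropy-based effective rank mentioned above can be computed as follows (a sketch using one common definition, which may differ from the exact variant in [1]):

```python
import numpy as np

# One common entropy-based definition of effective rank (our choice for
# illustration; the paper may use a different variant): the exponential of
# the Shannon entropy of the normalized singular-value distribution. It is
# maximal exactly when the singular values are evenly distributed.
def effective_rank(M):
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

# even spectrum -> effective rank equals the full rank
assert abs(effective_rank(np.eye(5)) - 5.0) < 1e-9
# one dominant singular value -> effective rank collapses toward 1
assert effective_rank(np.diag([1.0, 1e-6, 1e-6, 1e-6, 1e-6])) < 1.1
```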
Motivated by this theoretical insight, we developed a method that encourages a high effective rank of input representations and their covariance matrix by applying a soft constraint that encourages the last projection layer, $W$, to function as an orthogonal projection matrix, $O$. This ensures that the rank of the ID input representation and its covariance matrix produced by our method is closer to the upper-bound rank than that of any other projection matrix $W$ without this constraint, as shown below:
$$
\begin{split}
\text{rank}(\tilde{Z}^{T}\cdot\tilde{Z}) &= \text{rank}(\tilde{Z}) \\\\
&= \text{rank}(W\cdot Z^{T}) \\\\
&\le \text{rank}(O\cdot Z^{T}) \\\\
&= \text{rank}(\hat{Z}) \\\\
&= \text{rank}(\hat{Z}^{T}\cdot\hat{Z})
\end{split}
$$
where $Z$ is input feature before projection, and $\tilde{Z}$ and $\hat{Z}$ denote input representations obtained by projection matrix $W$ and $O$, respectively.
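A minimal sketch (our illustration, not the authors' implementation) of the soft orthogonality penalty $\lVert W W^{T} - I\rVert^2_F$ on the projection layer, showing that it vanishes exactly for row-orthonormal $W$:

```python
import numpy as np

# Minimal sketch (ours, not the authors' code) of the soft orthogonality
# penalty || W W^T - I ||_F^2 on the last projection layer W. Driving it to
# zero makes W act as an orthogonal projection, preserving the rank of the
# pre-projection features as argued above.
def orthogonality_penalty(W):
    d = W.shape[0]
    return np.linalg.norm(W @ W.T - np.eye(d), ord="fro") ** 2

rng = np.random.default_rng(0)

# a row-orthonormal W (built here via QR) attains zero penalty
Q, _ = np.linalg.qr(rng.normal(size=(16, 8)))
W_orth = Q.T                          # shape (8, 16), orthonormal rows
W_random = rng.normal(size=(8, 16))   # generic projection matrix

assert orthogonality_penalty(W_orth) < 1e-20
assert orthogonality_penalty(W_random) > 1.0
```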
> (b) How tight is the upper bounds (Theorem 3.1)?
Our theoretical analysis is inspired by two previous works that provide an OOD generalization error bound [4] and a domain discrepancy bound via the smallest singular value [5].
Intuitively, the inequality in our bound (1) in the manuscript approaches equality (becomes tight) under the following conditions:
* i) When the outputs of the learned classifier match those of the ideal joint classifier trained to minimize both the ID and OOD errors [4].
* ii) When the outputs of our classifier or of the ideal joint classifier are perfectly calibrated [4].
* iii) When the number of training samples $N$ approaches infinity [4].
* iv) If our classifier is a linear model, when its weight vector equals the eigenvector corresponding to the smallest eigenvalue of the input covariance matrix [5].
* v) When the OOD and ID calibration errors satisfy the relationship $\varepsilon_{OOD}(h,h^{\*})=\varepsilon^{2}_{ID}(h,h^{\*}) / (\varepsilon_{ID}(h,h^{*}) - 1)$.
While we leave the exact mathematical statement on the tightness of our bounds to future work, we demonstrate the validity of our bounds as shown in Section 5.1. This is further supported by promising results on real datasets presented in Section 5.2. Additionally, we provide empirical validation of the tightness of our OOD calibration error bound in Figure 5 of PDF.
We compute the right-hand side of our bound as $\varepsilon_{\text{ID}}(h) + \frac{1}{\sigma_{min}(\tilde{\Sigma}_{ID})} + \Delta$ by neglecting the $\mathcal{O}(\cdot)$ term (assuming we have sufficiently many training samples), and we approximate the joint optimal risk $\Delta$ as the sum of the ID and OOD errors of a model trained on the ID and OOD training sets simultaneously. We see that the estimated upper bound lies, on average, between $\varepsilon_{OOD}(h) + 2\,\text{std}(\varepsilon_{OOD}(h))$ and $\varepsilon_{OOD}(h) + 3\,\text{std}(\varepsilon_{OOD}(h))$, which does not deviate significantly from the true OOD error.
**R1-2**
> Were the main experiments repeated multiple times? If so, can the authors provide the error bars for the main experiments?
We repeated the experiments three times with different random seeds for each method and confirmed that the performance improvement exceeds the error bars. The results are provided in Figure 4 of the attached PDF. We will also update the tables in the manuscript accordingly.
**R1-3**
> The paper limits the scope to VLM fine-tuning, but does it also apply to fine-tuning other kinds of models?
Under the main challenge of fine-tuning foundation models without losing their inherent generalizability to unseen domains, we chose CLIP as our primary focus by following previous works [6,7,8,9].
However, as reviewer g5ML pointed out, our theoretical results are not confined to vision-language models (VLMs). Verifying the applicability of our method to other kinds of models will further broaden its impact. Therefore, we expanded our experimental scope to vision-only models, specifically a DINOv2 [10] pre-trained ViT-base, using the DomainBed benchmark [11], and confirmed the effectiveness of our proposed framework. The results are provided in Table 3 of the attached PDF.
We see that while ERM++ already achieves superior performance compared with the other baselines, applying the CaRot objective to ERM++ further improves its Accuracy and ECE, yielding the best results on both considered datasets.
---
Rebuttal Comment 1.1:
Comment: Thank you for the further clarifications.
I get the idea that a higher effective rank of the feature covariance matrix implies higher feature diversity. However, I'm still confused about the role of minimizing $\lVert W_v W_v^T - I\rVert^2_F$ in achieving such a goal.
Suppose $W_v = I$, which is the best we can get. This means the projection layer preserves the effective rank $r$ of the feature from the previous layer, so we have $r' = r$ where $r'$ is the effective rank of the projected feature. The question is, does this have any effect on $r$ itself? To me, it seems the answer is no because the effective rank $r$ of the pre-projection feature may be low even if $\lVert W_v W_v^T - I\rVert^2_F = 0$, i.e., they are independent.
Empirically, the ablation study (Table 3) suggests that the proposed constraint, i.e., $\lVert W_v W_v^T - I\rVert^2_F = 0$, plays a small role. The main improvement is brought by FLYP and SD but they are not the main contribution of this work (as partly mentioned by Reviewer yVZD). Could the authors please further comment on this?
---
Rebuttal 2:
Comment: Thank you for your active query!
### 1. Rank comparison
> Allow us to recap the rank of a matrix product,
$\text{rank}(AB) \le \min(\text{rank}(A),\text{rank}(B))$,
where equality holds when matrix $A$ or $B$ has full rank.
Then, for the neural network representations in our case, **the post-projection feature always has a smaller rank than the pre-projection feature if the projection matrix $W$ is not full rank**. There is a solid theoretical background for this so-called **rank diminishing phenomenon** [Feng et al. 2022].
As you said, our constraint cannot increase the rank of the pre-projection feature and merely preserves it in the ideal case; however, this preservation induces a higher rank of the post-projection feature compared with the baseline, which does not encourage the projection matrix to be full rank.
Therefore, our method induces a higher effective rank of the feature covariance matrix than baseline methods, which do not enforce rank preservation, and thereby induces the learning of diverse features that contribute to better OOD generalization.
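A quick numerical illustration (ours, with hypothetical shapes) of the rank inequality above:

```python
import numpy as np

# Quick numerical illustration (ours) of rank(AB) <= min(rank(A), rank(B)):
# projecting features through a low-rank W collapses their rank, i.e., the
# rank diminishing phenomenon; a full-rank W preserves it.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))                               # pre-projection features
W_low = rng.normal(size=(16, 4)) @ rng.normal(size=(4, 16))  # rank-4 projection
W_full = rng.normal(size=(16, 16))                           # full rank (almost surely)

assert np.linalg.matrix_rank(Z) == 16
assert np.linalg.matrix_rank(Z @ W_low) == 4    # rank collapsed by low-rank W
assert np.linalg.matrix_rank(Z @ W_full) == 16  # full-rank W preserves rank
```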
_We really appreciate this valuable query, and will revise our manuscript to clarify further the connection between singular values, rank, and OOD generalization._
### 2. Significance of performance gain
We would like to emphasize that, in Table 2 of the attached PDF file, our orthogonal constraint, OC for short, **consistently improves the OOD accuracy and ECE in all 8 cases**, as well as a case of the toy experiment in the left side of Figure 3 in the manuscript. While the amount of improvement might be seen as small, we believe **this consistent improvement (9 out of 9) indicates strong evidence of the effectiveness of our theory-motivated rank-preserving constraint for OOD generalization**, which is complementary to the benefits from the FLYP or SD. Moreover, the amount of performance gain is sometimes significant, e.g., +1.6 in terms of accuracy of FT w/ and w/o OC, -0.01 in terms of ECE of FLYP-WiSE w/ and w/o OC.
Please do not hesitate to ask us any further questions; we will be delighted to have an extended discussion with the reviewer g5ML.
---
### reference
[Feng et al. 2022] Rank Diminishing in Deep Neural Networks
---
Rebuttal Comment 2.1:
Comment: Thank you for addressing my queries. I have no further questions or comments. I would like to keep my score for now.
---
Reply to Comment 2.1.1:
Title: Further Suggestions for Improvement
Comment: Thank you for your detailed feedback and inquiry so far!
Your comments were highly valuable to us. We have thoroughly addressed the issues raised by Reviewer g5ML.
We believe these revisions will enhance our manuscript significantly.
**We respectfully inquire if you could consider raising the score, given that all of your concerns are addressed.
If not, we are interested in understanding if there are any additional concerns or suggestions that we could address to improve the submission further.**
Again, we would like to express our gratitude for your commitment to reviewing our paper thoroughly and providing constructive feedback, which helps us refine our work. | Rebuttal 1:
Rebuttal: ## Summary of Rebuttal
*We sincerely thank all four reviewers for their constructive feedback and valuable comments.*
**The strengths of our work, as highlighted by reviews:**
* The motivation behind this work is clear, addressing an important but under-explored problem.
* There is a strong connection between theoretical findings and the proposed method, which is simple and intuitive.
* The experimental setting is well-chosen, and the extensive results, coupled with theoretical analysis, demonstrate the method's effectiveness.
**Our responses to the reviews:**
* **[Details on orthogonality constraint]** We provided an intuitive interpretation of our theoretical finding about the connection between the smallest singular value and out-of-distribution (OOD) generalization. We also justified the implementation of the orthogonal constraint and demonstrated its effectiveness through extended ablation studies.
* R1(g5ML)-1(a), R3(a8to)-3, R4(yVZD)-4
* **[Details on EMA SD]** We explained the design motivation for EMA SD with results of extended ablation studies.
* R4(yVZD)-1
* **[Generality of method]** We demonstrated the general applicability of our method by expanding the experimental setup to include vision-only models, multi-source domain generalization, and synthetic shift settings.
* R1(g5ML)-3, R2(nZD2)-1, R2(nZD2)-3, R4(yVZD)-2
* **[Repeated experiments]** We validated our results by repeating the experiments with multiple seeds.
* R1(g5ML)-2, R4(yVZD)-3
* **[Elaboration on results]** We emphasized our performance improvement and provided interpretations of the results.
* R2(nZD2)-1, R2(nZD2)-2
* **[Elaboration on theorem]** We justified the assumptions of Theorem 3.1.
* R1(g5ML)-1(b), R3(a8to)-2
* **[Notation]** We clarified our notations and terms.
* R2(nZD2)-4, R3(a8to)-1
## DomainBed Setup
> To address reviewers' concerns, we conducted several experiments. Among them, we elaborate on the setup for "CaRot for **vision-only pre-trained model** fine-tuning on **multi-source (train) domains**", which received the most inquiries.
* We validate the applicability of CaRot to the vision-only pre-trained model, `DINOv2 ViT-base` [10], on two representative domain generalization benchmarks, `PACS` and `VLCS`. Each dataset consists of four domains with the same class labels (i.e., covariate shift). Following the leave-one-out training setting, we trained the model on three domains and tested it on the unseen remaining domain.
* We consider eight baseline methods [24-31], and we validate the CaRot objective by incorporating the orthogonal constraint term and self-distillation with exponential moving average term into the training pipeline of `ERM++` [31].
* Following DomainBed [11], we set a fixed hyperparameter sweep budget of 10 for all baseline methods. Each hyperparameter configuration was run three times, resulting in $120 = 10 \text{ (hyperparameters)} \times 3 \text{ (seeds)} \times 4 \text{ (domains)}$ runs per algorithm on a dataset.
* Among the three model selection strategies of DomainBed, we report results with the training-domain validation and test-domain validation (oracle) selection strategies.
## Reference
1. Assessing the downstream performance of pretrained self-supervised representations by their rank, Garrido et al. 2023
2. Self-Supervised Learning via Redundancy Reduction, Zbontar et al. 2021
3. Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes et al. 2022
4. A theory of learning from different domains, Ben-David et al. 2010
5. First Steps Toward Understanding the Extrapolation of Nonlinear Models to Unseen Domains, Dong and Ma 2022
6. Robust fine-tuning of zero-shot models, Wortsman et al. 2022
7. Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution, Kumar et al. 2022
8. Improved finetuning of zero-shot vision models, Goyal et al. 2023
9. Robust Fine-Tuning of Zero-Shot Models Using Random Text Guidance, Nam et al. 2024
10. Learning Robust Visual Features without Supervision, Oquab et al. 2023
11. In Search of Lost Domain Generalization, Gulrajani and Lopez-Paz 2020
12. Measuring Robustness to Natural Distribution Shifts in Image Classification, Taori et al. 2020
13. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, Hendrycks and Dietterich, 2019
14. Self-Distillation as Instance-Specific Label Smoothing, Zhang and Sabuncu 2020
15. Bootstrap Your Own Latent A New Approach to Self-Supervised Learning, Grill et al. 2020
16. A General Framework for Self-supervised Learning in Speech, Vision and Language, Baevski et al. 2022
17. Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning, Song et al. 2023
18. Vision and Language Representation Learning with Momentum Distillation, Li et al. 2021
19. Deep Neural Networks as Gaussian Processes, Lee et al. 2018
20. Deep Convolutional Networks as Shallow Gaussian Processes, Garriga-Alonso et al. 2019
21. Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes, Yang 2021
22. Scaling Vision Transformers to 22 Billion Parameters, Dehghani et al. 2023
23. On the infinite-depth limit of finite-width neural networks, Hayou 2023
24. Statistical learning theory, Vapnik 1998
25. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization, Sagawa et al. 2019
26. Self-supervised Contrastive Regularization for Domain Generalization, Kim et al. 2021
27. Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization, Ahuja et al. 2022
28. Optimal Representations for Covariate Shift, Ruan et al. 2022
29. Invariant Causal Mechanisms through Distribution Matching, Chevalley et al. 2022
30. Probable Domain Generalization via Quantile Risk Minimization, Eastwood et al. 2022
31. An Improved Baseline for Domain Generalization, Teterwak et al. 2023
Pdf: /pdf/ad82546f62e727290e15ef62c22ce80776cd4267.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
What Is Missing For Graph Homophily? Disentangling Graph Homophily For Graph Neural Networks | Accept (poster) | Summary: This paper analyzes graph homophily and distinguishes three types of homophily: label, structural, and feature homophily. Theoretical analysis is based on CSBM-3H (contextual stochastic block model with three types of homophily). Based on this analysis, a new combined measure Tri-Hom is proposed. The relation between GNN performance and three types of homophily is empirically analyzed on CSBM-3H. Then, the agreement of Tri-Hom with GNN performance is analyzed using several real datasets. It is shown that the proposed homophily measure better agrees with the GNN performance than other existing measures.
Strengths: 1. The proposed model CSBM-3H is more flexible and models more complicated relations between graph structure, node features, and node labels than other typically used models. Such a model can be used as a synthetic benchmark in research papers.
2. Theoretical analysis shows how different homophily aspects relate to the distinguishability of aggregated and non-aggregated node features.
Weaknesses: 1. There seems to be a terminological inconsistency in the text. In the abstract, it is written that "graph homophily refers to the phenomenon that connected nodes tend to share similar characteristics." However, structural homophily introduced in Section 3.2 is about a different phenomenon: nodes from the same class tend to have similar neighbors (but they can be not connected). In some previous papers, e.g. in [31], these concepts are distinguished, and LI is not called a homophily measure.
2. The particular form of structural homophily (2) is not motivated. In Section 3.2, it is written that there are several existing structural homophily measures, but they are not chosen. Why the proposed expression (2) is better?
3. Similarly, there is not much motivation for the feature homophily proposed in (3). More motivation on why this diffusion process is supposed to reflect homophily would be helpful. In particular, (3) seems to assume a particular form of feature propagation. It is unclear how limiting this assumption is.
4. Throughout the paper, the class sizes are assumed to be balanced. In line 192, it is written that this is done without loss of generality, but it is not clear why so. In particular, several papers previously discussed that node/edge homophily is not a suitable measure for class-imbalanced datasets. In other words, class balance plays a crucial role for homophily measures and thus considering only balanced classes is a significant limitation. One of the examples where class balance seems to be critical is Theorem 2.2: here the threshold 1/C holds only for balanced classes.
Some typos:
- Line 12: "element" -> "elements"
- Line 61: "i.i.d.assumption" — missing space
- Line 62: "Compared previous" -> "Compared to the previous"
- Line 69: "In addition, Our" -> "In addition, our"
- Line 82: It is written that $I_E$ is a matrix, in this case, it should be from $R^{E\times E}$
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Section 3.1, many label homophily measures are introduced. Which one is chosen by the authors?
2. In equation (2), how exactly $\sigma_{max}$ is computed?
3. Could you give more details/motivation about the difference between label and structural homophily (in general)? Since both are expected to describe the relation between structure and labels.
4. In equation (5), it would be useful to discuss whether the obtained values $h$ are comparable across features of different natures. In particular, how these values behave when we do certain feature transformations: shifts, scaling, or changing variance. These would give more intuition about the proposed form of feature homophily.
5. How performance is measured in Table 1?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Some of the limitations are discussed in Section 6. I agree with the authors that societal impact is irrelevant for this type of work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reviews. We have fixed the typos and revised our manuscripts accordingly. Here are our responses to your concerns.
## Part 1/4
## W1: Terminological inconsistency of graph homophily and structural homophily.
## RW1:
Graph homophily is a general concept that describes the phenomenon of connected nodes tending to have similar characteristics. However, in the current graph community, the concept of homophily can sometimes be generalized to describe relations beyond connected nodes, such as Aggregation homophily [9], Coleman's homophily [10], and Preference-aware homophily [11]. The proposed structural homophily measures the consistency of structural information within intra-class nodes, which is crucial for the disentanglement of graph homophily. There indeed exists potential confusion and we will clarify their relations in the revised version.
## W2: 1) The particular form of structural homophily (2) is not motivated. 2) Why not choose other existing structural homophily measures and 3) why the proposed expression (2) is better?
## RW2:
1) As mentioned in Line 109, Section 3.1, label homophily focuses solely on the consistency of label information for connected nodes, while neglecting structural information. This limitation leads to a partial understanding of graph homophily and results in the misalignment of label homophily with GNNs’ performance [1]. To address this, we propose structural homophily $h_S$ in Section 3.2.
2) The proposed $h_S$ in Eq. (2) is based on the sampling process, unlike the existing structural-based metric [7,8]. This enables $h_S$ to be incorporated into CSBM-3H to analyze the impact on graph-aware models. Furthermore, $h_S$ provides a general form that can be applied to broad settings. For example, to extend $h_S$ to multi-hop neighbors, we can revise $\mathcal{S}(\cdot)$ in Eq. (2), a flexibility not available with other metrics.
3) In our experiments, the effectiveness of the proposed Eq. (2) is demonstrated in Table 5 in Appendix D.9, where we measure the correlation of metrics with the performance gap between GNNs and MLPs. $h_S$ shows the best correlation compared with other structural-based homophily metrics. Additionally, we explore how $h_S$ influences accuracy on graphs with respect to each class in Figure 6. The findings indicate a general tendency of increased accuracy with higher $h_S$.
## W3: 1) The motivation for the feature homophily proposed in (3). 2) Why this diffusion process is supposed to reflect homophily would be helpful. 3) In particular, Eq. (3) seems to assume a particular form of feature propagation. It is unclear how limiting this assumption is.
## RW3:
1) We would like to clarify the motivation for feature homophily $h_F$ in the paper. In Section 3.3, we list all the current feature-based homophily metrics, which cannot disentangle themselves from label homophily, leading to redundancy and a decrease in useful information within the feature aspect. Based on this motivation, we propose $h_F$ to fully disentangle the feature aspect. The $h_F$ is invariant with respect to both label and structural homophily, thereby dissociating it from these two types of homophily. This is part of our main contributions, which disentangles graph homophily from three aspects, better aligns with GNN performance, and helps explain other interesting but under-explored phenomena of graph homophily in previous studies [1, 2, 3].
2) In Line 152, we explain the meaning of $h_F$ through the sign of $\omega$. A positive, negative, or zero $\omega$ corresponds to an attractive relation, a repulsive relation, or independence of the nodes with their neighbors, respectively. Specifically, consider a case in social networks where people interact with each other. A positive $h_F$ refers to users influencing their neighbors to adopt the same opinions as theirs. A negative $h_F$ refers to users arguing with each other while more strongly holding their opinions to oppose others. A zero $h_F$ indicates that interactions among users do not change their opinions. This process in social networks is generally referred to as a diffusion process [4], which could reflect the $h_F$ defined in Eq. (3).
3) In Appendix B, we discuss why the assumption of a diffusion process makes sense. In Eq. (28), we show that the structural-aware features are updated by $\frac{\partial\boldsymbol{X}(t)}{\partial t}$, which contains $\boldsymbol{X}(0)$ (ego-node features) and $(\mathcal{F}(\boldsymbol{A}) - \boldsymbol{I})\boldsymbol{X}(t-1)$ (influences of the graph topology). We use linear dependency to model this feature dependency function $\mathcal{F}(\boldsymbol{X}) = \omega\boldsymbol{A}$ following [5] to analyze graph homophily with GNN performance. To the best of our knowledge, we are the first to consider feature dependencies to analyze GNN performance under CSBMs. In contrast, previous studies [1, 2, 3, 6], which assume no feature dependencies during graph modeling, oversimplify by completely ignoring feature dependency.
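To make the sign semantics of $\omega$ concrete, here is a hedged one-step illustration (ours, not the paper's exact generative model) of a linear feature-diffusion update of the general form discussed above:

```python
import numpy as np

# Hedged illustration (ours) of one linear diffusion step of the general form
# X <- X(0) + (omega / rho(A)) * A @ X(0): a positive omega pulls connected
# nodes' features together (attraction), a negative omega pushes them apart
# (repulsion), and omega = 0 leaves them independent of the topology.
A = np.array([[0.0, 1.0], [1.0, 0.0]])        # two connected nodes
rho = np.abs(np.linalg.eigvals(A)).max()      # spectral radius (= 1 here)
X0 = np.array([1.0, -1.0])                    # initially dissimilar features

def diffuse(omega):
    return X0 + (omega / rho) * A @ X0

gap0 = abs(X0[0] - X0[1])                               # initial gap: 2.0
assert abs(diffuse(0.5)[0] - diffuse(0.5)[1]) < gap0    # attraction
assert abs(diffuse(-0.5)[0] - diffuse(-0.5)[1]) > gap0  # repulsion
assert np.allclose(diffuse(0.0), X0)                    # independence
```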
## W4: Throughout the paper, the class sizes are assumed to be balanced. In line 192, it is written that this is done without loss of generality, but it is not clear why so.
## RW4:
This paper aims to disentangle graph homophily from three aspects and investigate its overall impact at the graph level. While imbalanced classes do influence the graph, this influence does not affect the impact of graph homophily, as shown in previous studies [2, 6]. Consequently, the class-balance assumption has little effect on our findings and does not hurt our main contributions. Our observations on CSBM-3H also show that imbalanced class sizes do not provide extra information.
---
Rebuttal 2:
Title: Part 2/4
Comment: ## Q1: In Section 3.1, many label homophily measures are introduced. Which one is chosen by the authors?
## RQ1:
As described in Footnote 6 on Page 8, we use node homophily as label homophily in our experiments.
## Q2: In equation (2), how exactly is $\sigma_{max}$ computed?
## RQ2:
We measure the standard deviation of structural information in Eq. (2); the value of $\sigma_{max}$ depends on which function of structural information $\mathcal{S}(\cdot)$ is selected. As mentioned in Line 127, we use the class distribution of local neighbors $\boldsymbol{D}^{\mathcal{N}}$ as $\mathcal{S}(\cdot)$. The maximum variance is achieved when $\boldsymbol{D}^{\mathcal{N}}$ is most diverse for nodes within the same class, which equals $\sqrt{\frac{C-1}{C^2}}$, as used in our experiments. We briefly show how $\sigma_{max}$ is calculated in this scenario. The question is equivalent to the following:
Given a neighbor sampling probability matrix $P\in\mathbb{R}^{N\times C}$, where $p_{i,k}\in[0,1]$ and $\sum_{k=1}^C p_{i,k}=1$ for each $i$, we need to maximize $\sigma_{max}=\frac{1}{C}\sum_{k=1}^C\sigma(p_{:,k})$.
We first rewrite the $\sigma(p_{:,k})$ as
\begin{equation}
\sigma(p_{:,k}) = \sqrt{\frac{1}{N}\sum_{i=1}^N p_{i,k}^2 - \mu_k^2}
\end{equation}
where $\mu_k=\frac{1}{N}\sum_{i=1}^N p_{i,k}$
To maximize $\frac{1}{N}\sum_{i=1}^N p_{i,k}^2 - \mu_k^2$, the entries should be as unequal as possible under the constraint $\sum_{k=1}^C p_{i,k}=1$. The maximally unequal distribution is attained when each row $p_{i,:}$ has one entry equal to $1$ and the rest $0$.
Then, let $a_k$ be the number of $1$s in column $k$; we have $\frac{1}{N}\sum_{i=1}^N p_{i,k}^2=\frac{a_k}{N}$ and $\mu_k = \frac{a_k}{N}$, so that $\sum_{k=1}^C a_k = N$. Next, we can express the original objective as
\begin{equation}
\frac{1}{C}\sum_{k=1}^C\sigma(p_{:,k}) = \frac{1}{C}\sum_{k=1}^C \sqrt{ \frac{a_k}{N} - (\frac{a_k}{N})^2 }
\end{equation}
We observe that the summand $f(x)=\sqrt{x-x^2}$ is a concave function on $[0,1]$. According to Jensen's inequality (and using $\sum_{k=1}^C a_k = N$), we have
\begin{equation}
\frac{1}{C}\sum_{k=1}^C \sqrt{ \frac{a_k}{N} - (\frac{a_k}{N})^2 } \le \sqrt{ \frac{\frac{1}{C}\sum_{k=1}^Ca_k}{N} - (\frac{\frac{1}{C}\sum_{k=1}^C a_k}{N})^2 } = \sqrt{\frac{1}{C}-\left(\frac{1}{C}\right)^2}= \sqrt{\frac{C-1}{C^2}}
\end{equation}
which proves $\sigma_{max} = \sqrt{\frac{C-1}{C^2}}$, with equality attained when all $a_k$ are equal, i.e., $a_k = N/C$.
Note that this $\sigma_{max}$ only applies when using the class distribution of local neighbors, $\boldsymbol{D}^{\mathcal{N}}$, as the function of structural information $\mathcal{S}(\cdot)$. Different functions may yield different values for $\sigma_{max}$.
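As a numerical sanity check (ours) of the derived value, one can verify that evenly spread one-hot rows attain $\sigma_{max}=\sqrt{(C-1)/C^2}$ while any other row-stochastic matrix stays below it:

```python
import numpy as np

# Numerical sanity check (ours) of the derivation above: the mean per-column
# standard deviation of a row-stochastic matrix P is maximized by one-hot
# rows spread evenly over the C columns, attaining sigma_max = sqrt((C-1)/C^2).
def mean_column_std(P):
    return P.std(axis=0).mean()   # population std per column, averaged

N, C = 12, 4
sigma_max = np.sqrt((C - 1) / C**2)

# the maximizing configuration: one-hot rows, N/C rows per column
P_onehot = np.zeros((N, C))
P_onehot[np.arange(N), np.repeat(np.arange(C), N // C)] = 1.0
assert abs(mean_column_std(P_onehot) - sigma_max) < 1e-12

# any other row-stochastic matrix stays below the bound
P_rand = np.random.default_rng(0).dirichlet(np.ones(C), size=N)
assert mean_column_std(P_rand) <= sigma_max + 1e-12
```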
## Q3: Could you give more details/motivation about the difference between label and structural homophily (in general)? Since both are expected to describe the relation between structure and labels.
## RQ3:
We highlight two key differences between label homophily ($h_L$) and structural homophily ($h_S$). First, the "atom" information, as described in line 116, for $h_L$ is the label information $Y$, while for $h_S$ it is the structural information $D^\mathcal{N}$ (where we use neighbor distribution in this paper). Second, the measurement function for $h_L$ is an indicator function of two nodes connected by topology, $\mathbb{1}(Y_u=Y_v)$, where $e_{uv}\in\mathcal{E}$. In contrast, for $h_S$, it is the consistency function between two nodes connected by their classes, $(D^\mathcal{N}_u - D^\mathcal{N}_v)^2$, where $Y_u=Y_v$ (by rewriting Eq. (2)). Each of these metrics reflects a unique aspect that the other cannot capture.
To further illustrate these differences, we provide diagrams and visualizations of three types of homophily in the author rebuttal PDF. Figures 2(a), (b), and (c) show that an increase in $h_L$ enhances the connectivity of intra-class nodes. However, even when $h_L$ is low, sometimes the performance of the GNN can still be satisfactory. This is because $h_L$ does not account for the consistency of structural information among intra-class nodes. To address this, we propose $h_S$, which captures this aspect. Figures 2(d), (e), and (f) demonstrate that an increase in $h_S$ improves the informativeness of the neighbors of intra-class nodes, making the graph resemble planar and periodic graphs.
Thus, $h_L$ and $h_S$ represent graph homophily from label and structural perspectives, respectively. Along with feature homophily ($h_F$), they provide a comprehensive understanding of graph homophily. Additional explanations of $h_L$ and $h_S$ are provided in our responses in RW2 and author rebuttal.
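To make the distinction concrete in code, consider a minimal sketch (our own illustration; the helpers and the 4-cycle example are not from the paper): a graph can have $h_L = 0$ while its intra-class neighbor distributions $D^\mathcal{N}$ are perfectly consistent, which is exactly the aspect $h_S$ captures and $h_L$ misses.

```python
import numpy as np

def label_homophily(A, y):
    """Edge-level label homophily: fraction of edges with matching labels."""
    u, v = np.nonzero(np.triu(A, 1))
    return float(np.mean(y[u] == y[v]))

def neighbor_class_dist(A, y, C):
    """D^N: each node's neighbor class histogram, row-normalized."""
    counts = A @ np.eye(C)[y]
    return counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

# 4-cycle with alternating labels: h_L = 0 (fully heterophilic), yet the
# neighbor distributions of same-class nodes are identical, i.e. the
# structure is perfectly consistent in the h_S sense.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
y = np.array([0, 1, 0, 1])

h_L = label_homophily(A, y)            # 0.0: no same-label edges
D = neighbor_class_dist(A, y, C=2)
intra_gap = np.sum((D[0] - D[2])**2)   # 0.0: class-0 nodes agree on D^N
```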
---
Rebuttal 3:
Title: Part 3/4
Comment: ## Q4: In equation (5), how these values behave when we do certain feature transformations: shifts, scaling, or changing variance?
## RQ4:
Thank you for raising the question about feature transformations in the context of feature homophily. We first clarify that Eqs. (4)(5) define feature homophily, while its measurement is introduced in Eq. (6). Below, we prove that the estimation of feature homophily in Eq. (6) is invariant to feature transformations, including shifts, scaling, and variance changes.
We first define the problem: In a graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ with $N$ nodes, there are node labels $\boldsymbol{Y}$ and node features $\boldsymbol{X_{:,m}}$ in dimension $m$. We can estimate the feature homophily $h^*_{F,m}$ for feature in dimension $m$ as:
\begin{equation}
h_{F,m}^{*}(\mathcal{G},\boldsymbol{X_{:,m}},\boldsymbol{Y}) = \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[X_{u,m}(0)-X_{v,m}(0)\right]^2, \; \text{ where } \boldsymbol{X_{:,m}}(0) = \left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)\boldsymbol{X_{:,m}}
\end{equation}
We need to prove that the estimation of feature homophily **is invariant to feature shifts, scaling, and variance changes.**
**A. Shifts**
Let's consider a shift of node features $\boldsymbol{X_{:,m}}$ by a constant vector $\boldsymbol{C}$:
\begin{equation}
\boldsymbol{X_{:,m}'} = \boldsymbol{X_{:,m}}+\boldsymbol{C}
\end{equation}
Then we have structural-agnostic features as
$\boldsymbol{X_{:,m}'}(0)
= \left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)\boldsymbol{X_{:,m}'}$
$= \left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)(\boldsymbol{X_{:,m}}+\boldsymbol{C})$
$=\boldsymbol{X_{:,m}}(0)+\left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)\boldsymbol{C}$
Then a new estimation of $h_{F,m}'$ under this feature shift can be expressed as
$h_{F,m}'(\mathcal{G},\boldsymbol{X_{:,m}'},\boldsymbol{Y}) = \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[X_{u,m}'(0)-X_{v,m}'(0)\right]^2$
$= \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[\left(X_{u,m}(0)+\left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)C_u\right)-\left(X_{v,m}(0)+\left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)C_v\right)\right]^2$
$= \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[\left(X_{u,m}(0)-X_{v,m}(0)\right)+\left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)(C_u-C_v)\right]^2$
Since $\boldsymbol{C}$ is a constant vector, we have $C_u-C_v=0$. Next, we have
$h_{F,m}'(\mathcal{G},\boldsymbol{X_{:,m}'},\boldsymbol{Y}) = \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[\left(X_{u,m}(0)-X_{v,m}(0)\right)\right]^2 = h_{F,m}^{*}(\mathcal{G},\boldsymbol{X_{:,m}},\boldsymbol{Y})$
Therefore, we proved the estimation of feature homophily is invariant to the operation of feature shifts.
**B. Scaling**
Let's consider scaling node features $\boldsymbol{X_{:,m}}$ by a constant $\alpha$:
\begin{equation}
\boldsymbol{X_{:,m}'} = \alpha\boldsymbol{X_{:,m}}
\end{equation}
Then we have structural-agnostic features as
$\boldsymbol{X_{:,m}'}(0)
= \left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)(\alpha\boldsymbol{X_{:,m}})$
$= \alpha\left(\boldsymbol{I}-\frac{h_{F,m}}{\rho(\boldsymbol{A})}\boldsymbol{A}\right)\boldsymbol{X_{:,m}}$
$= \alpha\boldsymbol{X_{:,m}}(0)$
Then a new estimation of $h_{F,m}'$ under this feature scaling can be expressed as
$h_{F,m}'(\mathcal{G},\boldsymbol{X_{:,m}'},\boldsymbol{Y}) = \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[X_{u,m}'(0)-X_{v,m}'(0)\right]^2$
$= \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \alpha^2\left[X_{u,m}(0)-X_{v,m}(0)\right]^2$
$= \text{arg} \min_{h_{F,m}} \alpha^2 \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[X_{u,m}(0)-X_{v,m}(0)\right]^2$
Since $\text{arg} \min_{x} (\cdot)$ is invariant to scaling by a positive constant, i.e., $\text{arg} \min_{x}(cf(x))=\text{arg} \min_{x}f(x)$ for any $c>0$ (here $c=\alpha^2$), we have
$h_{F,m}'(\mathcal{G},\boldsymbol{X_{:,m}'},\boldsymbol{Y}) = \text{arg} \min_{h_{F,m}} \sum_{\substack{u,v\in\mathcal{V},\\ Y_u=Y_v}} \left[\left(X_{u,m}(0)-X_{v,m}(0)\right)\right]^2 = h_{F,m}^{*}(\mathcal{G},\boldsymbol{X_{:,m}},\boldsymbol{Y})$
Therefore, we proved the estimation of feature homophily is invariant to the operation of feature scaling.
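A quick numerical check of the scaling case (a grid-search sketch of our reading of Eq. (6); `estimate_hF` is illustrative and not the authors' implementation): scaling the features multiplies every loss value by $\alpha^2$, so the arg-min over $h$, and hence the estimate, is unchanged.

```python
import numpy as np

def estimate_hF(A, x, y, grid=np.linspace(-1, 1, 401)):
    """Grid-search estimate in the spirit of Eq. (6): choose h minimizing
    intra-class squared distances of x(0) = (I - h/rho(A) A) x.
    Illustrative sketch only, not the authors' code."""
    rho = np.max(np.abs(np.linalg.eigvalsh(A)))   # spectral radius (A symmetric)
    same = y[:, None] == y[None, :]               # mask of same-class pairs
    losses = []
    for h in grid:
        x0 = x - (h / rho) * (A @ x)
        d2 = (x0[:, None] - x0[None, :]) ** 2
        losses.append(d2[same].sum())
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(0)
N = 40
A = np.triu((rng.random((N, N)) < 0.2).astype(float), 1)
A = A + A.T                                       # random symmetric adjacency
y = rng.integers(0, 3, size=N)
x = rng.normal(size=N)

# Scaling invariance: the estimate is identical for x and 5x.
assert abs(estimate_hF(A, x, y) - estimate_hF(A, 5.0 * x, y)) < 1e-12
```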
---
Rebuttal 4:
Title: Part 4/4
Comment: **C. Variance Changing**
Changing the variance of $\boldsymbol{X_{:,m}}$ can be seen as a combination of scaling and shifting. Assume the node features follow a Gaussian distribution $N(\mu,\sigma^2)$; after changing the variance from $\sigma^2$ to $\beta\sigma^2$, the new node features are
\begin{equation}
\boldsymbol{X_{:,m}'} = \sqrt{\beta}(\boldsymbol{X_{:,m}}-\mu)+\mu
\end{equation}
where $\boldsymbol{X_{:,m}'}$ is obtained by subtracting $\mu$, multiplying by $\sqrt{\beta}$, and adding $\mu$ back. We have already shown that the estimation of feature homophily is invariant to feature shifts and scaling. Since changing the variance is a combination of these two operations, the estimation is also invariant to variance changes. We will add these proofs to the Appendix.
## Q5: How performance is measured in Table 1?
## RQ5:
The measurement of the performance in Table 1 is introduced in Line 330, where we show the Pearson correlation between all the metrics and model performance on the 31 real-world datasets. Specifically, each cell in Table 1 denotes a correlation value, which is calculated using the corresponding homophily metrics and model performance on 31 datasets. For example, for $h_{edge}$ on GCN, we measure the $h_{edge}$ (in Table 3) and the performance of GCN (in Table 4) on 31 datasets. Then, we have 31 values for both the homophily metric and model performance. Finally, we calculate the correlation between these two sets of data, which reflects how well the homophily metric aligns with model performance. We hope this clarification helps in understanding the performance shown in Table 1.
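In code, each cell of Table 1 corresponds to something like the following (the numbers below are made-up placeholders for a few of the 31 datasets, not the paper's values):

```python
import numpy as np

# One cell of Table 1, per the explanation above: the Pearson correlation
# between a homophily metric and a model's performance across datasets.
# Placeholder values for illustration only; the paper uses 31 datasets.
h_edge  = np.array([0.81, 0.12, 0.74, 0.30, 0.65])   # metric, one value per dataset
gcn_acc = np.array([0.88, 0.45, 0.79, 0.52, 0.71])   # GCN accuracy per dataset

cell = np.corrcoef(h_edge, gcn_acc)[0, 1]            # Pearson r for this (metric, model) pair
```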
**References**
[1] Ma, Y., Liu, X., Shah, N., Tang, J. Is Homophily a Necessity for Graph Neural Networks? In ICLR, 2022.
[2] Wang, J., Guo, Y., Yang, L., Wang, Y. Understanding Heterophily for Graph Neural Networks. CoRR abs/2401.09125, 2024.
[3] Lee, S. Y., Kim, S., Bu, F., Yoo, J., Tang, J., Shin, K. Feature Distribution on Graph Topology Mediates the Effect of Graph Convolution: Homophily Perspective. CoRR abs/2402.04621, 2024.
[4] Chang, B., Xu, T., Liu, Q., et al. Study on Information Diffusion Analysis in Social Networks and Its Applications. Int. J. Autom. Comput. 15, 377–401, 2018.
[5] Shi, D., Han, A., Lin, L., Guo, Y., Wang, Z., Gao, J. Design Your Own Universe: A Physics-Informed Agnostic Method for Enhancing Graph Neural Networks. arXiv:2401.14580, 2024.
[6] Luan, S., Hua, C., Xu, M., Lu, Q., Zhu, J., Chang, X.-W., Fu, J., Leskovec, J., Precup, D. When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability. In NeurIPS, 2023.
[7] S. Luan, C. Hua, Q. Lu, J. Zhu, M. Zhao, S. Zhang, X.-W. Chang, and D. Precup. Revisiting heterophily for graph neural networks. In NeurIPS, 2022.
[8] O. Platonov, D. Kuznedelev, A. Babenko, and L. Prokhorenkova. Characterizing graph datasets for node classification: Homophily-heterophily dichotomy and beyond. In NeurIPS, 2024.
[9] Luan, S., Hua, C., Lu, Q., et al. Revisiting Heterophily for Graph Neural Networks. In NeurIPS, 2022.
[10] Coleman, J. S. Relational Analysis: The Study of Social Organizations with Survey Methods. Human Organization, 1958.
[11] Jiang, W., Gao, X., Xu, G., et al. Challenging Low Homophily in Social Recommendation. In Proceedings of the ACM Web Conference, 2024.
---
Rebuttal Comment 4.1:
Comment: Thank you for your detailed response! I have several questions to clarify.
W4. Could you please provide more details on why class-balanced assumption does not affect the results and conclusions? In particular, could you please clarify whether the following is true: “One of the examples where class balance seems to be critical is Theorem 2.2: here the threshold 1/C holds only for balanced classes”?
Q4. Thanks for the detailed response! Just to clarify – is it true that this invariance holds for the estimate (6) but may not hold for the original definition (or it is hard to prove it there)?
Q5. Here my question was about “model performance” – which measure is used here?
---
Reply to Comment 4.1.1:
Title: Reply to 2nd rebuttal
Comment: Please let me know if my responses have addressed your concerns. Thank you!
---
Rebuttal 5:
Title: Rebuttal - 2nd Response
Comment: ## W4
Could you please provide more details on why class-balanced assumption does not affect the results and conclusions? In particular, could you please clarify whether the following is true: “One of the examples where class balance seems to be critical is Theorem 2.2: here the threshold 1/C holds only for balanced classes”?
## RW4 - 2nd
Thank you for your additional review comments. You are correct that the $\frac{1}{C}$ threshold in Theorem 2.2 is derived under the assumption of balanced classes; most current studies likewise do not adequately address the challenges posed by imbalanced data [1, 2, 3, 7, 8]. Given that handling imbalanced data is not the primary focus of this work, and that there is no standard benchmark for imbalanced heterophilous graphs, we conducted a preliminary experiment on synthetic datasets with imbalanced classes. The results, presented in the table below, suggest that the conclusions of Theorem 2.2 remain valid and are not significantly impacted by class imbalance. However, we agree that addressing imbalanced data will be important in future work, particularly for model design.
The table below illustrates how varying $h_L$ impacts GCN performance on a node classification task. We examine a scenario with 3 classes and 1,000 nodes, reporting the accuracy and standard deviation across 5 random runs. The cases are: 1) balanced classes (33\%/33\%/33\%), 2) imbalanced classes case 1 (60\%/30\%/10\%), and 3) imbalanced classes case 2 (80\%/10\%/10\%).
| $h_L$ | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Balanced | 87.44±4.14 |74.32±4.83 |60.72±6.77 |**53.44±6.90** |58.56±4.53 |72.56±3.69 |81.12±2.36 |90.08±4.13 |96.80±1.47 |98.00±1.36 |99.36±0.61 |
| Imbalanced 1 | 90.16±3.18 |83.12±3.50 |71.60±7.81 |**63.28±4.06** |68.24±5.21 |76.88±4.11 |83.04±2.91 |88.80±3.31 |92.56±2.63 |94.64±2.27 |97.84±1.69 |
| Imbalanced 2 | 80.92±3.98 |80.00±1.62 |**77.12±0.52** |78.96±2.52 |79.36±2.66 |80.36±2.94 |82.68±4.54 |85.76±5.37 |92.80±2.67 |94.12±2.45 |96.40±1.90 |
The results demonstrate that as $h_L$ increases, GCN performance first declines and then improves on both the balanced and imbalanced datasets. This trend aligns with Theorem 2.2, which states that GNN performance reaches its lowest point at mid-level values of $h_L$. Therefore, the conclusions of Theorem 2.2 still hold under imbalanced classes.
## Q4
Thanks for the detailed response! Just to clarify – is it true that this invariance holds for the estimate (6) but may not hold for the original definition (or it is hard to prove it there)?
## RQ4 - 2nd
Yes, it is hard to prove the original definition. As indicated in line 167, both $h_{F,m}$ and $\boldsymbol{X_{:,m}(0)}$ are unknown, making it impossible to measure homophily directly from Eq. (4). To overcome this challenge, we minimize Eq. (6) to estimate $h_{F,m}$, leveraging the fact that the intra-class distances of $\boldsymbol{X_{:,m}(0)}$ are small.
To address the concern about the accuracy of the estimation in Eq. (6), we conducted an experiment on synthetic datasets. For each original $h_F$, we estimate $h_F^*$ using Eq. (6) and report the differences $|h_F - h_F^*|$ in the table below.
| Original $h_F$ | -1.00 | -0.80 | -0.60 | -0.40 | -0.20 | 0.00 | 0.20 | 0.40 | 0.60 | 0.80 | 1.00 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Estimated $h_F^*$ | -0.95 | -0.80 | -0.62 | -0.42 | -0.21 | -0.01 | 0.20 | 0.41 | 0.61 | 0.86 | 1.00 |
| Differences $\|h_F-h_F^*\|$ | 0.05 | 0.00 | 0.02 | 0.02 | 0.01 | 0.01 | 0.00 | 0.01 | 0.01 | 0.06 | 0.00 |
The results show that the average difference is below 2\%, which we consider an acceptable estimation error.
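The "below 2%" figure can be verified directly from the table (a trivial computation, reproduced here for convenience):

```python
import numpy as np

# The |h_F - h_F^*| differences reported for the 11 synthetic settings above.
diffs = np.array([0.05, 0.00, 0.02, 0.02, 0.01, 0.01, 0.00, 0.01, 0.01, 0.06, 0.00])

mean_diff = diffs.mean()   # ≈ 0.017, i.e. below 2%
assert mean_diff < 0.02
```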
## Q5
Here my question was about “model performance” – which measure is used here?
## RQ5 - 2nd
Thank you for clarifying this with us. We use the **accuracy** for datasets with more than two classes and the **AUC-ROC** for binary-class datasets to evaluate the model performance on the node classification. This measurement has been widely used for assessing GNN performance [1, 2, 3, 6, 7, 8].
Thank you for your valuable suggestions! We will revise our paper accordingly.
---
Summary: The paper unifies three homophily metrics (label, structural, and feature) by proposing a Contextual Stochastic Block Model (CSBM-3H) that describes all three types of homophily, allowing topology and feature generation to be controlled by these metrics. An extensive theoretical analysis of CSBM-3H leads to a new composite metric, Tri-Hom, which considers all three homophily aspects and overcomes the limitations of previously proposed and conventional homophily metrics. Importantly, the authors validate the correlation between these metrics and model performance (under node classification) on 31 real-world benchmark datasets, where Tri-Hom outperforms various baseline metrics.
Strengths: 1) The paper is interesting, and combining all three homophily metrics into one unified metric is novel.
2) The theoretical analysis of the proposed method is robust. The paper introduces three theorems that establish connections to important aspects verified in the literature, thereby unifying multiple works under a single framework.
3) The proposed Contextual Stochastic Block Model, which incorporates three types of homophily, is innovative and could be highly useful for benchmarking general models under varying types and levels of homophily.
4) The experimental section is extensive, including 31 datasets, and yields promising results.
Weaknesses: 1) The paper validates performance only under the node classification task. It would be very interesting to also validate Tri-Hom for the task of link prediction.
2) A minor weakness is that some results in Table 1 for certain baselines are quite close, with only marginal improvements. Could the authors comment on this phenomenon? Why might this be the case, and could it indicate that for some models or networks, having only one homophily metric would be sufficient?
Technical Quality: 3
Clarity: 3
Questions for Authors: Would it be possible to visualize the generated SBM networks under different levels of homophily and observe the resulting block structures with respect to all different types of homophily?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors should discuss limitations of their paper and proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: ## Part 1/2
## W1: The paper validates performance only under the node classification task. It would be very interesting to also validate Tri-Hom for the task of link prediction.
## RW1
Thanks for your valuable suggestions. Evaluating Tri-Hom and other homophily metrics for link prediction is indeed an interesting direction. We only include results on node classification because it is the most classical task for GNNs and most previous studies focus on it. Furthermore, beyond the space limitation, validating Tri-Hom for link prediction raises several challenges that we would need to tackle:
**Missing node labels.** Not all graphs used for link prediction are associated with node labels. However, most homophily metrics, including our proposed Tri-Hom, are measured based on node labels. It is infeasible to directly apply these metrics to graphs without labels; more task-oriented adaptations are needed.
**Task Goal.** The goal of node classification is to learn distinguishable node representations, while the goal of link prediction is to predict the likelihood of future or missing links by leveraging reliable structural representations of the graph. This requires an understanding of existing connections and patterns within the graph. Therefore, it is hard to directly apply Tri-Hom on link prediction regarding the different goals in the task.
Apart from that, the main contribution of this paper is disentangling graph homophily into 3 aspects. Previous theoretical studies [1, 2, 3, 4] of graph homophily only consider a single aspect of node classification, while we measure the synergy of 3 types of homophily and validate the theoretical results on both synthetic and real-world datasets. Considering the challenges above, here we only evaluate Tri-Hom on node classification, since it is the most widely explored task for GNN performance. In the future, we will explore adaptations of Tri-Hom to handle other graph tasks such as link prediction and graph clustering.
## W2: A minor weakness is that some results in Table 1 for certain baselines are quite close, with only marginal improvements. Could the authors comment on this phenomenon? Why might this be the case, and could it indicate that for some models or networks, having only one homophily metric would be sufficient?
## RW2
As shown in Table 1, the strongest baseline is class homophily ($h_{class}$), which underperforms our proposed Tri-Hom for graph-aware models $\mathcal{J}_h^{\mathcal{G}}$ on GNNs by an average gap of 4\%. It is not surprising that these label-based homophily metrics show strong performance because node labels contain the most important information for the task of node classification on graphs. Label-based homophily metrics outperform all other structural-based or feature-based metrics. The improvement of our metrics comes from the full consideration of all three aspects of graph homophily.
Although we measure performance on 31 real-world datasets, this number is insufficient to fully demonstrate the importance of feature homophily or structural homophily. As shown in Table 5, the distribution of these datasets across the three types of homophily does not vary significantly. Additionally, the Twitch datasets collected by [4] and some heterophilic graphs (Wisconsin, Cornell, Texas, Squirrel, Actor, Chameleon) collected by [5] exhibit high similarity in homophily levels. Therefore, more real-world datasets with varying levels of homophily, particularly for structural and feature homophily, are needed to test the effectiveness of homophily metrics comprehensively. We have attempted to mitigate the impact of limited dataset availability by testing on 31 different datasets.
Due to the scarcity of real-world datasets, we validate the performance of Tri-Hom on synthetic datasets, which offer a diverse range of homophily levels. As shown in Figure 1, the performance surface of Tri-Hom closely mirrors that of GNNs. Other homophily metrics, each based on a single factor, underperform Tri-Hom on synthetic datasets because they respond to only one type of homophily.
Furthermore, other studies [1, 2] that focus on homophily analysis only report results on 9 datasets, evaluating 'good' or 'bad' homophily on a case-by-case basis. Our work represents a significant advancement by fairly comparing homophily metrics across 31 real-world datasets with statistical significance.
---
Rebuttal Comment 1.1:
Title: Comment for the response
Comment: I would like to thank the authors for their effort in addressing the concerns and questions raised by me and my fellow reviewers. All of my issues/questions have been adequately addressed. I will therefore increase my score to accept.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are grateful for your recognition and solid support of our paper. Your insightful suggestions have greatly improved it.
---
Rebuttal 2:
Title: Part 2/2
Comment: ## Q1: Would it be possible to visualize the generated SBM networks under different levels of homophily and observe the resulting block structures with respect to all different types of homophily?
## RQ1:
Thank you for your constructive suggestions. We added the visualization of the generated graphs by CSBM-3H in Figure 2 of the submitted author rebuttal PDF. Based on this visualization, we draw the following conclusions:
**Label homophily.** As label homophily ($h_L$) increases, as shown in Figures 2(a), (b), and (c), nodes are more likely to connect with others that share the same label. Particularly, a high $h_L$ (Figure 2(c)) results in effective structural information, making it easier to delineate class boundaries. In comparison, a medium $h_L$ (Figure 2(b)) is less informative than an extremely low $h_L$ (Figure 2(a)). For instance, the absence of a red node among a red node's neighbors can help infer its class. This observation aligns with Theorem 2.1 in our paper.
**Structural homophily.** As structural homophily ($h_S$) increases, as shown in Figures 2(d), (e), and (f), neighbor distributions become more consistent. Consequently, a high $h_S$ allows us to capture effective structural information, as suggested by Theorem 2.2. Interestingly, we also find that a higher $h_S$ makes a graph resemble planar graphs [7] and periodic graphs [6]. We hypothesize that this is because stable structural information leads to more regular and meaningful structural patterns. In future work, it would be interesting to explore the connection between $h_S$ and these geometric properties of graphs.
**Feature homophily.** Figures 2(g), (h), and (i) illustrate different levels of feature homophily ($h_F$) within the same graph topology, where colors represent the distance of node features to various classes. Figure 2(h) demonstrates that a medium positive $h_F$ causes the features of some boundary nodes to exhibit characteristics of neighboring classes. In contrast, a higher positive $h_F$ (Figure 2(i)) increases feature dependencies, particularly affecting nodes closer to class boundaries. In real-world scenarios, a positive $h_F$ leads entities in a graph to show dependencies with their surrounding neighbors. For instance, people's opinions are influenced by their friends, resulting in similar characteristics. Conversely, a negative $h_F$ causes nodes to become more dissimilar from their neighbors. As shown in Figure 2(g), a negative $h_F$ creates a distinct boundary between classes. Additionally, within the same class in Figure 2(g), node colors differ in shade from those of their neighbors, because node features become more dissimilar under the "repulsive force", rather than the "attractive force", induced by a negative $h_F$. In online media, for example, people are likely to argue with those holding different opinions, and after such interactions they may reinforce their original opinions, a phenomenon resulting from the "repulsive force" associated with a negative $h_F$.
Finally, we have revised our manuscript according to your valuable reviews. Thank you.
## References
[1] Ma, Y., Liu, X., Shah, N., Tang, J. Is Homophily a Necessity for Graph Neural Networks? In ICLR, 2022.
[2] Luan, S., Hua, C., Xu, M., Lu, Q., Zhu, J., Chang, X.-W., Fu, J., Leskovec, J., Precup, D. When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability. In NeurIPS, 2023.
[3] Wang, J., Guo, Y., Yang, L., Wang, Y. Understanding Heterophily for Graph Neural Networks. CoRR abs/2401.09125, 2024.
[4] Lee, S. Y., Kim, S., Bu, F., Yoo, J., Tang, J., Shin, K. Feature Distribution on Graph Topology Mediates the Effect of Graph Convolution: Homophily Perspective. CoRR abs/2402.04621, 2024.
[5] Pei, H., Wei, B., Chang, K. C.-C., Lei, Y., Yang, B. Geom-GCN: Geometric Graph Convolutional Networks. In ICLR, 2020.
[6] Cohen, E., Megiddo, N. Recognizing properties of periodic graphs. In Applied Geometry and Discrete Mathematics, pp. 135-146, 1990.
[7] Barthelemy, M. Morphogenesis of Spatial Networks. Cham, Switzerland: Springer International Publishing, 2018.
---
Summary: The paper proposes a novel approach to understanding graph homophily by disentangling it into label, structural, and feature homophily. The introduction of the Tri-Hom metric combines these aspects to provide a more comprehensive measure of GNN performance. CSBM-3H is used to study the impact of these types of homophily. Extensive experimental results on synthetic and real-world datasets show the effectiveness of the findings.
Strengths: 1. The paper takes a comprehensive thought on graph homophily and provides with a more detailed understanding of homophily.
2. The introduction of the Tri-Hom metric, as well as the CSBM-3H model built on it, is novel.
3. The paper includes both theoretical analysis and empirical validation through synthetic and real-world datasets.
Weaknesses: The introduction of multiple new concepts and metrics might be overwhelming for readers who are not well-versed in graph theory and GNNs. Simplifying explanations or providing more intuitive examples could help.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. The definitions of structural and feature homophily are somewhat abstract. Could you provide more concrete examples or case studies to illustrate these concepts?
2. How to translate the finding to real-world applications?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Lack concrete examples for readers’ understanding.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: ## Q1 The definitions of structural and feature homophily are somewhat abstract. Could you provide more concrete examples or case studies to illustrate these concepts?
## RQ1
Thanks for your valuable suggestions. Lines 116 and 146 describe structural and feature homophily, respectively. To help clarify these definitions, the author rebuttal PDF adds diagrams of the 3 types of homophily and visualizes how these metrics influence graphs generated by CSBM-3H. As shown in Figure 1, label homophily measures label consistency along the graph topology, structural homophily measures the consistency of the neighbor distributions of intra-class nodes, and feature homophily measures feature dependencies along the graph topology. Each type of homophily captures a unique aspect. For example, in social networks, label homophily measures how likely people of the same type (age, hobby, and so on) are to be connected, structural homophily measures the consistency of the neighbors of a certain type of people, and feature homophily measures how much people's features are affected by their neighbors. For more concrete examples, please refer to RQ2 below.
## Q2 How to translate the finding to real-world applications?
## RQ2
Current real-world applications of graph homophily focus only on label homophily, which does not align well with GNN performance, as shown in this paper and other studies [1, 2]. To address this weakness, our findings provide a comprehensive view across 3 types of homophily, which could be applied in many real-world settings, such as social networks, recommendation, and urban computing.
### A. Social Networks
In social networks, homophily is defined as the tendency for people to seek out or be drawn to others who are similar to themselves [3]. This definition primarily explains the consistency of certain characteristics of people within the network topology. Our proposed concepts of structural homophily and feature homophily offer additional insights into social networks.
Structural homophily refers to the similarity of the local neighbors of individuals of the same type, which can be used to analyze the friend circles of specific user groups. For instance, in fraud detection on social media, fraudsters often target older individuals who are more vulnerable to scams, resulting in a high level of structural homophily. Therefore, we can identify potential fraudsters based on their structural connections. However, structural information can vary and may not always be informative. If fraudsters randomly select users to contact, identifying them through their neighbors becomes challenging, leading to a low level of structural homophily. Future research could focus on measuring the level of structural homophily in social networks to better understand user behaviors.
Feature homophily, on the other hand, describes how individuals are influenced by their neighbors. Different types of networks exhibit varying levels of feature homophily. When people discuss similar events online, their opinions may be influenced by those they follow, leading to a higher similarity in features with their neighbors, indicating a high level of feature homophily. Conversely, during online arguments, individuals connect with those holding different opinions. After such interactions, they may reinforce their original viewpoints, resulting in greater dissimilarity with their neighbors, indicating a low level of feature homophily. Investigating user behavior through feature homophily can reveal underlying intentions and improve model performance. Furthermore, feature homophily provides valuable insights into the extent to which users are influenced by their neighbors.
### B. Recommendation
Previous studies [4, 5] on recommendation systems using graph homophily have primarily focused on label homophily, which may result in misalignment with model performance, similar to what occurs in purely homogeneous graphs. To address this issue, it would be beneficial to define structural homophily within recommendation systems. For instance, we can measure the structural information of users within the same community by assessing the consistency of the items they have purchased. This approach allows us to determine whether this topological information can effectively predict links between users and specific items.
### C. Urban Computing
A recent study [7] proposes a method to measure spatial graph homophily in urban computing using a spatial diversity score with direction-aware and distance-aware partitions. However, this metric focuses solely on label homophily, leaving a significant opportunity to explore structural homophily. Structural homophily involves measuring the consistency of geographic information among similar types of locations. For example, bookstores are often surrounded by coffee shops, where customers can enjoy coffee while reading books [6], indicating a high level of structural homophily. Conversely, convenience stores might be located near a high-end fashion boutique, a fast-food restaurant, or an office building. Since the geographic information does not reliably indicate the presence of a convenience store, it exhibits a low level of structural homophily. Future research could explore structural homophily in urban graphs to analyze the behaviors of various urban objects, aiding in better city planning.
Thank you for your constructive suggestions. We have revised the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author for addressing my questions. The "Diagrams of 3 types of homophily" you provide in the rebuttal pdf helps me to understand your model better. I will maintain my positive score.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you for your positive rating and valuable suggestions.
---
Rebuttal 2:
Title: References
Comment: [1] Ma, Y., Liu, X., Shah, N., Tang, J. Is Homophily a Necessity for Graph Neural Networks? In ICLR, 2022.
[2] Luan, S., Hua, C., Xu, M., Lu, Q., Zhu, J., Chang, X.-W., Fu, J., Leskovec, J., Precup, D. When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability. In NeurIPS, 2023.
[3] Khanam, K. Z., Srivastava, G., Mago, V. The homophily principle in social network analysis: A survey. Multimedia Tools and Applications, 82(6): 8811-8854, 2023.
[4] Jiang, W., Gao, X., Xu, G., et al. Challenging Low Homophily in Social Recommendation. In Proceedings of the ACM on Web Conference, pp. 3476-3484, 2024.
[5] Gholinejad, N., Chehreghani, M. H. Heterophily-Aware Fair Recommendation using Graph Convolutional Networks. arXiv preprint arXiv:2402.03365, 2024.
[6] Niche to Discount: 12 Major Types of Retail Stores \& Retailers, FounderJar, https://www.founderjar.com/types-of-retail-stores/, 2023
[7] Xiao, C., Zhou, J., Huang, J., Xu, T., Xiong, H. Spatial Heterophily Aware Graph Neural Networks. In KDD, pp. 2752-2763, 2023. | Summary: This paper evaluates graph homophily from the perspectives of label, structure, and feature, which disentangle the dependencies of these three aspects. The theoretical analysis and experimental evaluations demonstrate the effectiveness of Tri-Hom.
Strengths: 1. This paper is innovative and significant in evaluating graph properties from their central components, namely, label, feature, and structure.
2. This paper highlights a missing component in evaluating graph homophily, making it novel compared to existing graph learning methods and evaluation metrics.
3. Extensive experiments are conducted on both synthetic and real-world datasets.
Weaknesses: 1. This paper lacks suggestions for designing models. New metrics are provided for evaluating graph homophily properties. However, as new models are continually proposed, it would be beneficial if the authors could provide some guidelines for designing models when tackling new datasets.
2. The connections and distinctions between the proposed metrics and the existing metrics from the label, feature, and structure perspectives need further clarification.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: please check the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Part 1/2
## Q1: Suggestions for designing models.
## RQ1
Thanks for your positive rating and constructive suggestions. It is interesting to consider how our conclusions can guide model design. Due to the page limit, we did not include suggestions for model design in the paper. Here we provide some guidelines for model design and future directions from three perspectives: label homophily, structural homophily, and feature homophily:
### A. Label homophily
We discussed how label homophily influences GCN and MLP, providing both theoretical proof (Theorem 2.1 in Line 232) and empirical experiments (the three sub-figures in the first row of Figure 4 in Appendix D.6). Our results show that GCN performs better than MLP in conditions of extremely low homophily (good heterophily [1]), but significantly worse than MLP in medium levels of homophily (mid-homophily pitfall [2]). This suggests that GCNs sometimes fail to extract effective topological information. To mitigate this weakness, it is preferable to add a residual connection to GNNs or introduce a learnable parameter that allows the model to balance graph-aware and graph-agnostic information. The necessity of residual connections has been verified in previous studies [3, 4, 5].
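The residual/balance idea above can be sketched minimally as follows (a generic illustration of blending graph-aware and graph-agnostic information with a balance parameter alpha; this is not the paper's proposed model, and the graph and features are invented).

```python
# Generic sketch: one propagation step that mixes graph-aware information
# (mean of neighbor features) with graph-agnostic information (the node's
# own features) via a balance parameter alpha. In a real model, alpha
# would be learned. Not the paper's model; data below is invented.

def blended_update(features, adj, alpha):
    """h_v = alpha * mean(neighbor features) + (1 - alpha) * own features."""
    out = {}
    for node, own in features.items():
        nbrs = adj.get(node, [])
        if nbrs:
            agg = [sum(features[n][d] for n in nbrs) / len(nbrs)
                   for d in range(len(own))]
        else:
            agg = own  # isolated node: fall back to its own features
        out[node] = [alpha * a + (1 - alpha) * o for a, o in zip(agg, own)]
    return out

features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
adj = {0: [1, 2], 1: [0], 2: [0]}

# alpha = 0 ignores the graph (MLP-like); alpha = 1 is pure aggregation (GCN-like).
print(blended_update(features, adj, 0.5))
```

Sweeping alpha between 0 and 1 interpolates between the graph-agnostic and graph-aware regimes discussed above, which is the intuition behind adding residual connections or a learnable mixing weight.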
In addition to model design at the graph level, we can also consider a fine-grained approach at the node level. Our results indicate that GCN does not always outperform MLP, suggesting that different models could be applied to nodes with varying levels of label homophily. To our knowledge, this type of personalized design has rarely been explored in current GNN research on heterophilic graphs, yet it holds significant potential for improving overall model performance.
### B. Structural homophily
As mentioned in Theorem 2.2, the performance of graph-aware models improves with an increase in structural homophily. This leads to a crucial question: how can we deal with graphs exhibiting varying levels of structural homophily (i.e., the consistency of structural information among intra-class nodes)? To address this issue, we can enhance current GNNs using two approaches: message-passing calibration and graph rewriting.
For the message passing calibration, several methods, such as GPRGNN [13], FB-GNNs [14], and ACM-GNNs [15], propose adding an additional high-pass filter to capture local variations and details in the graph structure. When structural homophily is low, the high-pass filter captures the diversification of individual nodes. Along with a low-pass filter, these methods perform well on graphs with varying levels of homophily. However, the high-pass filters used in these methods cannot capture more complicated structural information. Since $\mathcal{S}(\cdot)$ in structural homophily can be any measurement of structural homophily, this metric can evaluate more complex graph structures. In the future, it is promising to design novel filters based on structural homophily to capture more intricate structural information and improve model performance.
For the graph structure rewriting, many methods (MVGCN [6], GloGNN [7], WRGNN [8]) propose deriving a new graph topology based on node features or embeddings. This operation improves the connectivity of nodes with similar semantic contexts, thereby enhancing model performance. However, this rewriting could connect nodes from different classes, which impedes GNN performance. To resolve this, we can measure class-wise structural homophily as shown in Eq. (2) and design adaptations for different classes, which will be particularly beneficial for class-imbalanced graphs, such as in bot detection and fraud detection. Furthermore, we can adapt the structural measurement function $\mathcal{S}(\cdot)$. Since most current structural rewriting methods do not evaluate the informativeness of their rewriting basis (node embeddings with structural information), the proposed structural homophily can serve as a metric to evaluate which types of rewriting basis to select. For example, Geom-GNN [9] uses Isomap [10], Poincare embedding [11], and struc2vec [12] to construct new neighbors of nodes and empirically determines the best approach. Our structural homophily could identify the most effective embedding approach before training GNNs. Therefore, structural homophily provides a guideline for graph rewriting methods.
### C. Feature homophily
In Appendix B, we thoroughly discuss the motivation behind feature homophily, where feature dependencies measure how node features are influenced by their neighbors. To our knowledge, only a few GNNs [16] consider these feature dependencies in their design. There is significant potential to explore how feature dependencies function in graphs. For instance, in social networks, people's opinions are affected by those around them. Identifying different user types while filtering out the noise introduced by their neighbors remains an open question. Both our theoretical results (Theorem 2.3) and empirical results (Figure 4) demonstrate the synergy between feature homophily and label homophily in enhancing model performance. Based on these findings, future work could focus on designing various graph filters to optimize the objective in Eq. (8) by considering both label and feature homophily. Furthermore, feature homophily can explain the presence of node features, making it worthwhile to investigate how much features are influenced by their neighbors, particularly in temporal graphs.
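As an illustrative proxy in the spirit of this discussion (not the paper's $h_F$ definition; the graph and numbers are invented), one can measure how much each node's features align with the mean of its neighbors' features; positive values suggest attraction toward neighbors, negative values the "repulsive" effect described above.

```python
# Illustrative proxy (not the paper's h_F): average cosine similarity
# between each node's features and the mean of its neighbors' features.
# Values near +1 suggest strong positive feature homophily; values near
# -1 suggest a repulsive effect. Graph and features below are invented.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def feature_homophily_proxy(features, adj):
    """Average node-vs-neighbor-mean cosine similarity over all nodes."""
    scores = []
    for node, nbrs in adj.items():
        mean = [sum(features[n][d] for n in nbrs) / len(nbrs)
                for d in range(len(features[node]))]
        scores.append(cosine(features[node], mean))
    return sum(scores) / len(scores)

# Nodes aligned with their neighbors -> proxy close to +1.
features = {0: [1.0, 0.1], 1: [0.9, 0.2], 2: [1.1, 0.0]}
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(feature_homophily_proxy(features, adj))
```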
In conclusion, our findings offer valuable insights for model design. Compared with previous studies that analyze only the single factor of label homophily [1, 2], structural homophily [17], or feature homophily [18] in GNNs, our work considers the synergy of all three types of homophily. We validated our theoretical results using both synthetic and real-world datasets. We believe this work represents a significant advancement in the study of graph homophily and opens numerous intriguing directions for future research.
---
Rebuttal 2:
Title: Part 2/2
Comment: ## Q2: The connections and distinctions between the proposed metrics and the existing metrics from the label, feature, and structure perspectives need further clarification.
## RQ2
In lines 116 and 146, we highlight the differences between the proposed structural homophily and feature homophily. These definitions represent the basic elements of a graph: label, structural, and feature information, thereby providing a comprehensive understanding of graph homophily. Label homophily ($h_L$) describes the label consistency along the topology, which has been widely used in previous studies [2, 5, 7]. However, $h_L$ cannot capture the consistency of structural information among intra-class nodes, which also influences GNN performance. To address this limitation, we propose structural homophily ($h_S$) to describe the consistency of structural information among intra-class nodes. Unlike existing structural-based metrics [1, 14], our metric allows for any kind of structural measurement function and can be easily incorporated into CSBM-3H for analysis. Feature homophily ($h_F$) measures the feature consistency of nodes with their neighbors and can be fully disentangled from $h_L$ and $h_S$, something other feature-based metrics [18] cannot achieve (as shown in line 138). Furthermore, we have included diagrams and visualizations of the three types of homophily in the author rebuttal PDF to aid in understanding these definitions. According to your suggestions, we have revised our manuscript. Thank you.
### References
[1] Ma, Y., Liu, X., Shah, N., Tang, J. Is Homophily a Necessity for Graph Neural Networks? In ICLR, 2022.
[2] Luan, S., Hua, C., Xu, M., Lu, Q., Zhu, J., Chang, X.-W., Fu, J., Leskovec, J., Precup, D. When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability. In NeurIPS, 2023.
[3] Luo, Y., Shi, L., Wu, X.-M. Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification. CoRR abs/2406.08993, 2024.
[4] Xu, K., Hu, W., Leskovec, J., Jegelka, S. How Powerful are Graph Neural Networks? In ICLR, 2019.
[5] Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A., Prokhorenkova, L. A critical look at the evaluation of GNNs under heterophily: Are we really making progress? In ICLR, 2023.
[6] Wang, Y., Xiang, S., Pan, C. Improving the homophily of heterophilic graphs for semi-supervised node classification. In ICME, 2023.
[7] Li, X., Zhu, R., Cheng, Y., Shan, C., Luo, S., Li, D., Qian, W. Finding Global Homophily in Graph Neural Networks When Meeting Heterophily. In ICML, pp. 13242-13256, 2022.
[8] Suresh, S., Budde, V., Neville, J., Li, P., Ma, J. Breaking the limit of graph neural networks by improving the assortativity of graphs with local mixing patterns. In SIGKDD, 2021.
[9] Pei, H., Wei, B., Chang, K. C.-C., Lei, Y., Yang, B. Geom-GCN: Geometric Graph Convolutional Networks. In ICLR, 2020.
[10] Tenenbaum, J. B., De Silva, V., Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[11] Nickel, M., Kiela, D. Poincare embeddings for learning hierarchical representations. In NeurIPS, 2017.
[12] Ribeiro, L. F. R., Saverese, P. H. P., Figueiredo, D. R. struc2vec: Learning node representations from structural identity. In SIGKDD, 2017.
[13] Chien, E., Peng, J., Li, P., Milenkovic, O. Adaptive universal generalized pagerank graph neural network. In ICLR, 2021.
[14] Luan, S., Zhao, M., Hua, C., Chang, X.-W., Precup, D. Complete the missing half: Augmenting aggregation filtering with diversification for graph convolutional networks. In NeurIPS 2022 Workshop: New Frontiers in Graph Learning, 2022.
[15] Luan, S., Hua, C., Lu, Q., Zhu, J., Zhao, M., Zhang, S., Chang, X.-W., Precup, D. Revisiting heterophily for graph neural networks. In NeurIPS, 2022.
[16] Shi, D., Han, A., Lin, L., Guo, Y., Wang, Z., Gao, J. Design Your Own Universe: A Physics-Informed Agnostic Method for Enhancing Graph Neural Networks. CoRR abs/2401.14580, 2024.
[17] Wang, J., Guo, Y., Yang, L., Wang, Y. Understanding Heterophily for Graph Neural Networks. CoRR abs/2401.09125, 2024.
[18] Lee, S. Y., Kim, S., Bu, F., Yoo, J., Tang, J., Shin, K. Feature Distribution on Graph Topology Mediates the Effect of Graph Convolution: Homophily Perspective. CoRR abs/2402.04621, 2024.
---
Rebuttal 3:
Title: reply to your concerns
Comment: Please let us know if our responses have addressed your concerns. Thank you!
Rebuttal: We would like to thank all the reviewers for their valuable feedback. In this author rebuttal PDF, we provide diagrams and visualizations to help better understand our proposed definitions.
Figure 1 shows the definition of three types of homophily. Label homophily $h_L$ measures the label consistency along the graph topology, structural homophily $h_S$ measures the consistency of structural information within intra-class nodes, and feature homophily $h_F$ represents the feature dependencies along the graph topology. Each type of homophily represents a unique perspective towards the concept of graph homophily. In our paper, we disentangle the concept of graph homophily and investigate the synergy of these three types of homophily in Graph Neural Networks (GNNs) through theorems, simulations, and real-world experiments, providing a better understanding of graph homophily.
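As a side illustration of the first of these quantities (this is the standard edge-homophily ratio, a common proxy rather than the paper's exact Tri-Hom definition of $h_L$; the toy graph is invented):

```python
# Toy illustration: edge-level label homophily as the fraction of edges
# whose endpoints share a class label. A standard proxy, not the paper's
# exact h_L definition; the graph below is invented for the example.

def label_homophily(edges, labels):
    """Fraction of edges connecting nodes with the same label."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# A small graph: nodes 0-3 in class A, nodes 4-5 in class B.
labels = {0: "A", 1: "A", 2: "A", 3: "A", 4: "B", 5: "B"}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]

print(label_homophily(edges, labels))  # 4 same-label edges out of 6 -> 0.666...
```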
Figure 2 visualizes the impact of the three types of homophily in CSBM-3H: **1. Label homophily.** As label homophily ($h_L$) increases, nodes are more likely to connect with others sharing the same label, with high $h_L$ (Figure 2(c)) providing clearer class boundaries. Medium $h_L$ (Figure 2(b)) is less informative than very low $h_L$ (Figure 2(a)). **2. Structural homophily.** As shown in Figure 2(d), (e), and (f), when structural homophily ($h_S$) increases, neighbor distributions become more consistent, and high $h_S$ captures effective structural information. Besides, a higher $h_S$ makes a graph resemble planar and periodic graphs, suggesting stable structural information leads to regular patterns. **3. Feature homophily.** Figures 2(g), (h), and (i) show different levels of feature homophily ($h_F$) in the same graph topology. Medium positive $h_F$ (Figure 2(h)) causes boundary node features to resemble neighboring classes. Higher positive $h_F$ (Figure 2(i)) increases feature dependencies, especially near class boundaries. In real-world scenarios, positive $h_F$ causes entities to show dependencies with neighbors, like people influenced by friends. Negative $h_F$ makes nodes dissimilar from neighbors, creating distinct class boundaries and varied shades within the same class (Figure 2(g)). This "repulsive force" leads to reinforced original opinions after interactions, akin to online arguments.
Pdf: /pdf/4b55b37de3b130b17eff5a27d82ab9485fe9ae7a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Omnigrasp: Grasping Diverse Objects with Simulated Humanoids | Accept (poster) | Summary: The paper introduces Omnigrasp, a method for controlling a simulated humanoid with dexterous hands to grasp and manipulate a wide variety of objects along complex trajectories. The authors leverage a universal humanoid motion representation to improve training efficiency and scalability. This method achieves state-of-the-art success rates in object manipulation tasks and generalizes well to unseen objects and trajectories. Key contributions include a dexterous motion representation, the allowance for simple state and reward designs for training, and high success rates in diverse object manipulation scenarios.
Strengths: The challenging yet important topic of
1. full-body dexterous grasping, which must account for the physical constraints and instability of the body, and
2. omnidirectional movement after grasping, i.e., moving the object in any direction within a reachable range,
is well worth exploring. The paper not only presents an effective algorithm that does not heavily rely on existing object trajectories but also conducts experiments on an extensive set of objects to evaluate grasping success rates.
Weaknesses: While the paper marks a significant advancement in humanoid dexterous grasping, some inherent flaws in the setting hinder its seamless transfer to real-world scenarios. Specifically, some state variables, such as the object mesh, pose, and velocity, cannot be accurately accessed in the real world. This discrepancy between simulation and reality poses a challenge for the practical deployment of the proposed method.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How do you plan to bridge the gap between simulation and real-world applications, given that the input assumes access to the object pose, velocity, and mesh, which are impossible to obtain in the real world?
2. How do you envision your method being adapted for, or at least benefiting, downstream tasks, e.g., more fine-grained dexterous manipulation?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: 1. The approach is validated only in a simulated environment, which may not capture all real-world complexities and variabilities.
2. The method assumes accurate state estimation for object meshes, poses, and velocities, which is challenging to achieve in real-world scenarios.
3. Fine-grained dexterous manipulation is yet to be achieved by this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments! We have revised the paper to provide more discussion for sim-to-real transfer and downstream tasks. To address your concerns and questions:
---
> **Transfer to Real Humanoid**
We acknowledge that Omnigrasp in its current form could not be applied to the real world. Its potential for real-world deployment lies in the scalable learning framework (first learn a motion representation, then train grasping and object trajectory following). To deploy in the real world and avoid using privileged information, the popular practice in sim-to-real is to distill a teacher policy into a student policy that does not use privileged information [1, 2, 3]. We imagine an input space similar to [3] (8-point bounding box) could be used in the real world for grasping. The mesh information could be replaced with vision or point clouds [1]. Similar to the transfer observed from the motion imitation task to real humanoid motion imitation [4, 5, 6], we believe that a framework similar to Omnigrasp could be deployed to the real world.
> **More Downstream Task Like Fine-grained Manipulation**
We believe that if we train *specifically* for the object manipulation task for one object, our framework is capable of learning fine-grained manipulation. Our early results indicate that we can *overfit* to fine-grained object movement and handovers using the same motion latent space. This indicates that the motor skills learned in PULSE-X could support fine-grained manipulation. Once we add more diverse objects and trajectories, the learning problem becomes significantly harder and we observe the policy struggle with more precise trajectory tracking. While we focus on scaling the grasping task using simulated humanoids, we believe that tuning the reward for the fine-grained manipulation task on a smaller number of objects can lead to promising results. How to learn a general policy that can do both is an interesting yet challenging future direction.
**References**
> [1] DexPoint: Generalizable Point Cloud Reinforcement Learning for Sim-to-Real Dexterous Manipulation
[2] Robot parkour learning
[3] DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics
[4] Expressive whole-body control for humanoid robots
[5] HumanPlus: Humanoid Shadowing and Imitation from Humans
[6] Learning human-to-humanoid real-time whole-body teleoperation
---
Rebuttal Comment 1.1:
Title: Keep my original recommendation
Comment: Despite its current limitations in transferability and scalability, I believe this work can provide a foundation for future humanoid grasp learning methods. I will keep my original decision.
---
Reply to Comment 1.1.1:
Title: Thank you for your comments and response!
Comment: The authors appreciate your suggestions and Accept recommendation. Please let us know if you have any additional questions! | Summary: The paper introduces a method for controlling a simulated humanoid robot to grasp and move objects along a trajectory with the use of a dex hand, This approach enables the robot to handle diverse objects with diverse trajectories. The key contribution is a humanoid motion representation that enhances training efficiency and human-like manipulation skills and applies it to grasping tasks. The method achieves a reasonable success rate in completing object trajectories and generalizes well to unseen objects.
Strengths: * This work extends previous research on PHC with dexterous hand capabilities, enabling whole-body manipulation in simulation with a high success rate. The results, including supplemental videos, demonstrate human-like motion for object grasping and trajectory following.
* Experiments show that the proposed controller exhibits reasonable robustness and generalization ability across objects of different scales.
* Extensive ablation studies validate the effectiveness of various components, providing plausible analysis and explanations for comparisons, such as why experiments without object shape still achieve reasonable grasping performance.
* The paper is well-written, easy to follow, and provides sufficient technical details, including supplemental code, for the community to reproduce the results.
Weaknesses: * Despite extensive ablation studies, the work lacks some important baselines, such as fully end-to-end RL with similar compute (R1 in ablation), considering that training PHC-X and PULSE-X takes half a month. Baselines utilizing human motion prior data in different ways, such as AMP or ASE, are also missing.
* More details about the dexterous hand should be included, given its significance over previous work on PHC. Comparisons with experiments using PHC but not PHC-X, such as using PHC for body control and a raw action space for the hand, are needed. The ablation study compares with a motion prior trained without Dex-AMASS, but it's unclear how the hand joints are controlled, and more details are necessary.
* [Minor - Nothing significant] Though the authors claim the work can be extended to real robots in the supplementary material, it seems unlikely due to the low simulation frequency (leading to low simulation fidelity), unrealistic robot methodology design, and the use of privileged information.
Technical Quality: 2
Clarity: 3
Questions for Authors: * Considering most of the human body motions and hand motions are randomly paired from different datasets, why not train the model separately (PHC for body motion and another model for hand motion)? What additional benefits does modeling them together provide?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors include a reasonable discussion of the limitation and no obvious potential negative societal impact needs to be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful suggestions and feedback. We have revised the paper to provide additional baselines (AMP and PHC), add details about dexterous hands, and discuss decoupled body and hand prior. To address your questions:
---
> **Additional Baselines**
In Table 2 of the global PDF, Row 2, we train the policy without PULSE-X to collect $10^{10}$ samples for 1 month (Omnigrasp uses $10^{9}$) and observe its performance still significantly lags behind Omnigrasp. Training with only RL (even with AMP) also leads to non-human-like motion (see supplement videos). Row 3 reports training our method with AMP but not PULSE-X, which does not lead to a high success rate. Row 4 reports results from Braun et al. [1], which uses ASE-style latent space **trained specifically for the grasping task** on GRAB data. Compared to Braun et al., we achieve a much higher success rate in terms of grasping and trajectory following and can follow complex trajectories. In comparison to ASE, PULSE-X's latent space is trained on AMASS and covers much broader types of human motion. ASE's discriminative latent space, while effective when learning from specialized and curated datasets, struggles to learn from large-scale unstructured datasets such as AMASS, as shown in [2].
> **Details about the Dexterous Hand**
We will add more details about how the hands are controlled in the revised manuscript. The hands are treated the same as the rest of the body (e.g. toe or wrist) and actuated using PD controllers (Section 4.1).
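For readers unfamiliar with PD actuation, a generic sketch of the standard proportional-derivative control law follows (this is the textbook formulation, not Omnigrasp's exact gains or implementation; the numbers are invented):

```python
# Generic PD-control sketch (standard formulation, not the paper's exact
# gains or simulator code): each joint is driven toward a target angle
# with torque tau = kp * (target - q) - kd * qdot.

def pd_torque(q, qdot, q_target, kp, kd):
    """Proportional-derivative torque for one joint.

    q: current joint angle (rad), qdot: joint velocity (rad/s),
    q_target: PD target angle (rad), kp/kd: stiffness and damping gains.
    """
    return kp * (q_target - q) - kd * qdot

# Example: joint at 0.2 rad moving at 1.0 rad/s, target 0.5 rad.
print(pd_torque(0.2, 1.0, 0.5, kp=50.0, kd=2.0))  # 50*0.3 - 2*1.0 = 13.0
```

In this setup, the policy outputs the PD targets per joint, and the controller converts them to torques at every simulation step.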
Using PHC for body control and raw action space for the hand is an excellent suggestion. However, this approach requires we learn a policy to output kinematic poses as "actions" for PHC to imitate. As observed in [2], kinematic motion is a poor sampling space for RL due to its high dimension and lack of physical constraints (e.g. a small change in root position can lead to a large jump in motion). Thus, prior art that uses kinematic motion as the action space (e.g. kinematic policy [3, 4]) uses supervised learning to train the motion generator instead of RL. Supervised learning would require paired full-body human grasping data, which is scarce and limited in diversity. One of the main advantages of Omnigrasp is its ability to learn grasping policy **without paired full-body grasping data**, enabling it to scale to many objects and diverse trajectories.
Further, even if a policy could output ground-truth kinematic pose for PHC, small errors in imitation can lead to the hand missing the objects. To demonstrate this point, we use **ground truth** MoCap as input to a pretrained PHC-X policy (\~30mm imitation error on average) for grasping, using sequences from GRAB. The result from Table 2 (global PDF) Row 1 indicates that the accuracy of a trained imitator does not support object grasping. To use PHC for the grasping task, we will need to fine-tune PHC with object awareness and pair it with a strong kinematic motion generator. Such an approach has been explored for box loco-manipulation [6] without hands, but it only supports moving boxes for now.
> **Training with Dex-AMASS**
We apologize for the confusion. When comparing training with or without Dex-AMASS for PULSE-X, the only difference is the **training data**. In both cases, the hand joints are controlled using PD controllers to output PD targets, but one with regular AMASS and one with our Dex-AMASS as training data. We will further clarify this.
> **Extending to Real Robots**
We will revise the sentence to "While the state has access to privileged information and the current humanoid has no real-world counterpart, the overall system design methodology has the potential to be transferred to a real humanoid robot, similar to how the motion imitation task is applied in recent humanoid work [5, 7, 8]." We believe that the framework of first learning a universal motion representation, then learning grasping policy in simulation, followed by conducting sim-to-real modifications (e.g. domain randomization, distilling into a vision-based policy), **has the potential** to be applied to real-world humanoids.
> **Train Separate Body and Hand Models**
As mentioned in Section 6, while we utilize a simple yet effective unified motion latent space, separate motion representation for hands and body could lead to further improvements. We completely agree that training a separate model *could* be beneficial. Braun et al [1] use decoupled body and hand prior, and achieve a lower success rate. We hypothesize that since the hand is tethered to the body and performs actions based on different wrist movements (which affects gravity), training a hand-only model that performs well when combined with the body is non-trivial. Since we observe accurate hand-tracking results when training a motion imitator jointly for hands and body, we proceed with the joint latent space design and find it is sufficient for achieving good grasping results. We are actively exploring using decoupled hand and body latent space, though it has not yet shown real benefit.
**References**
> [1] Physically plausible full-body hand-object interaction synthesis
[2] Universal humanoid motion representations for physics-based control
[3] Learning predict-and-simulate policies from unorganized human motion data
[4] Dynamics-regulated kinematic policy for egocentric pose estimation
[5] Learning human-to-humanoid real-time whole-body teleoperation
[6] Hierarchical planning and control for box loco-manipulation
[7] Expressive whole-body control for humanoid robots
[8] HumanPlus: Humanoid Shadowing and Imitation from Humans
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the concerns. I'm keeping my original evaluation.
One additional question about separately controlling the hand and body: though it might require outputting kinematic poses using PHC, it seems that using PULSE (which, if I understand correctly, takes a latent command as input) plus a separate controller for the hand should address most of the problems the authors raised about using kinematic poses. Is there any specific challenge or problem with this?
---
Rebuttal 2:
Title: Follow up to Comment by Reviewer gXSs
Comment: Thank you for the discussion!
PULSE is not compatible with the body of the SMPL-X humanoid, since it is trained using the SMPL humanoid. As the body shapes for SMPL and SMPL-X are not the same, SMPL-X and SMPL humanoids have slightly different bone lengths and directions. Since the GRAB dataset we are using is in SMPL-X format, we initiated this work to support the SMPL-X humanoid. As more and more datasets are captured using SMPL-X instead of SMPL-H, we hope our pipelines can be future-proof and support SMPL-X-based datasets.
We would also like to note that raw hand actions are problematic, as the controller can adopt non-human-like strategies in grasping (see Supplement Site, Training Without PULSE-X). Using raw actions leads to hands becoming distorted due to their high degrees of freedom and unbounded exploration. To provide a prior over the hand's motion, either an AMP-style or a PULSE-like motion prior is needed. Using PULSE for the body and the raw action space for the hand plus some form of hand prior would also be quite close to our method in terms of methodology and compute overhead (for training PHC and PULSE), and would support our claim that a strong motion prior enables learning diverse grasping and object trajectory following. We are actively exploring separate hand and body priors, though this does not appear to be the bottleneck yet.
Please feel free to ask us any additional questions! | Summary: This paper proposes an approach to learning a humanoid controller that can manipulate objects to follow trajectories. It first assembles a dataset of human bodies and hands motions, and learns a control policy from the state transitions in the dataset. Then, they distill this policy using a VAE to obtain an action decoder that models realistic human action distributions. After that, they use this distribution as the action space to learn an RL policy whose reward is based on object trajectory following. With this action space, the neural network policy can output realistic actions and avoid exploration difficulties. The authors demonstrate several impressive, yet natural and physically plausible motions for a simulated humanoid to grasp and move objects according to trajectories.
Strengths: The results are generally good, and the motion is relatively natural and physically plausible.
The demonstration of extending this approach to high-dimensional control is important.
The authors also provide several insightful observations, such as how to assemble the dataset to extend PULSE to human hand motion and how to achieve diverse grasping behaviors. These contributions are all important to the community.
Weaknesses: While I definitely think this paper is good, I believe there are several critical aspects that should be more carefully evaluated, either qualitatively or quantitatively. I will provide the summary comments in this section and write the specific questions that I would like answers to during the discussion phase in the next section.
First, I’m generally curious about the robustness of the proposed system. The object trajectory following policy takes a few inputs without noise from the simulator (joint positions, object latent code, proprioception). However, in reality, these quantities are far from perfect.
Second, this approach assumes the humanoid is similar enough to the mocap data. I agree this is a reasonable assumption. However, in reality, humanoid robots and their hands are usually not perfect replicas of humans. How similar do their morphologies need to be?
Regarding the learning approach, it first learns a policy to output actions given the human sequence data. Then it distills this policy network using VAE to learn the distributions as the action space. There should be an alternative approach that can serve as a simple baseline (details below). This is not discussed (or I missed the reference?).
The current reward contains several terms with different reward weights and hyper-parameters. I would not say this is a simple reward, especially without a concrete comparison to previous work.
Technical Quality: 3
Clarity: 3
Questions for Authors: Regarding the robustness of the system: How would the policy perform in noisy situations? For example, when the joint positions are not accurate (real robots will have imperfect joint encoders). And how accurate does the object mesh need to be?
How important is the morphology similarity between the humanoid and the hand? For example, do they need to have the same number of joints or links? I also observed even in the current system, it seems the simulated humanoid does not perfectly match human kinematics. Is this correct? If so, how do you bridge this gap?
Regarding the simple baseline alternatives:
* Instead of learning actions from human trajectory sequences between two timesteps, why not directly use a VAE to reconstruct human poses? This VAE would directly learn a distribution over human joint positions. Then, in RL policy training, you could map the policy output to natural human joint positions using this VAE.
* Even when one wants to learn the “actions” instead of “joint positions”, one could use an encoder-decoder structure (either a VAE or a vanilla auto-encoder) as the policy backbone and learn directly in the PULSE-X phase, instead of separating it into two stages.
I’m curious whether the authors tried these alternatives before, or whether this comparison exists in the literature. Do they perform poorly? It would be very helpful to analyze this problem or provide references in the text.
A more comprehensive discussion of the failure cases would be helpful. You show several videos of the failure cases, which I think is very helpful. But why do they fail? Is it because the objects themselves are quite difficult? On this point, it would also be very helpful to have a per-object accuracy analysis.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors describe the limitations mainly as not performing dexterous in-hand manipulation. The authors could consider using recent learning-based dexterous in-hand manipulation work as the “mocap dataset” in their formulation and learn an action distribution from it.
I think there is another limitation, which is that the current formulation does not demonstrate capability when the object needs to interact with the environment.
While I acknowledge that the above two points are out of scope and will not affect my assessment at all, they might be interesting future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful feedback and comments. We have revised the paper to include robustness tests, a discussion about kinematic latent space, and failure case analysis. To address your concerns:
---
> **Robustness**
In Table 1 of the global PDF, we add uniform random noise in [-0.01, 0.01] to both the task observation $s_t^g$ (positions, object latent codes, etc.) and the proprioception $s_t^p$. A similar scale (0.01) of random noise is used in sim-to-real RL to handle noisy input in real-world humanoids [1]. We see that Omnigrasp is relatively robust to input noise, even though it has not been trained with noisy input. The performance drop is more prominent in the acceleration $E_{\text{acc}}$ and velocity $E_{\text{vel}}$ metrics. Adding noise during training can further improve robustness. We do not claim that Omnigrasp is currently ready for real-world deployment, but we believe that a similar system design **plus sim-to-real modifications** (e.g., domain randomization, distilling into a vision-based policy) has such potential.
> **Morphology**
The AMASS [2] dataset has been successfully used in real-world humanoid control [1, 3, 4]. Retargeting techniques can be used to transfer human motion to the humanoid, even when they do not share the same morphology or number of joints. Similarly, human hand motion in MANO [6] format can be transferred to robotic hands [7, 8].
One of the main strengths of Omnigrasp is its independence from paired full-body grasping and object trajectory data, which is expensive to capture and therefore only available in small-scale datasets. Since the retargeting process could introduce errors that cause human-object interaction to be imprecise, retargeted interaction data can be hard to use for methods that rely on demonstrations. Our two-stage process (first obtain motion prior and then learn grasping policy) can leverage imperfectly retargeted motion, since as long as the motion is human-like, we can acquire motor skills by imitating it.
> **Baseline Alternative**
Excellent advice! Such a general-purpose kinematic latent space has been used in physics-based control for pose estimation [10] and animation [11], though few have been extended to include dexterous hands. These latent spaces, like HuMoR [9], model motion transition using an encoder $q_\phi (z_t| x_t, x_{t-1})$ and decoder $p_\theta(x_t | z_t, x_{t-1})$ where $x_t$ is the pose at time step t and $z_t$ is the latent code. $q_\phi$ and $p_\theta$ are trained using supervised learning. The issue with applying such a latent space to simulated humanoid control is twofold:
- The output $x_t$ from the VAE model, while representing natural human motion, does not model the PD-target (action) space required to maintain balance. This is shown in prior art [10, 11], where an additional motion imitator is still needed to actuate the humanoid by imitating $x_t$ instead of using $x_t$ as policy output (PD-target).
- $q_\phi$ and $p_\theta$ are optimized using MoCap data, whose $x_t$ values are computed using ground truth motion and finite difference (for velocities). As a result, $q_\phi$ and $p_\theta$ handle noisy humanoid states from simulation poorly. Thus, [10] runs the kinematic latent space in an open-loop auto-regressive fashion without feedback from physics simulation (e.g. using $x_{t-1}$ from the previous time step's output rather than from simulation). The lack of feedback from physics simulation leads to floating and unnatural artifacts [10], and the imitator heavily relies on residual force control [12] to maintain stability.
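To make the structure of such a kinematic latent space concrete, here is a toy, self-contained sketch of the transition encoder/decoder and an open-loop auto-regressive rollout. All dimensions and the linear "networks" are placeholders for illustration only, not the actual HuMoR/PULSE models; the point is that the decoder is conditioned on its *own* previous output rather than the simulated state.

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, LATENT_DIM = 6, 2  # toy dimensions, not those of any real model

# Toy linear "networks" standing in for the learned q_phi and p_theta.
W_enc = 0.1 * rng.normal(size=(LATENT_DIM, 2 * POSE_DIM))
W_dec = 0.1 * rng.normal(size=(POSE_DIM, LATENT_DIM + POSE_DIM))

def encode(x_t, x_prev):
    # q_phi(z_t | x_t, x_{t-1}); only the latent mean is modeled here
    return W_enc @ np.concatenate([x_t, x_prev])

def decode(z_t, x_prev):
    # p_theta(x_t | z_t, x_{t-1}); predicts the next kinematic pose
    return W_dec @ np.concatenate([z_t, x_prev])

def open_loop_rollout(x0, steps):
    # Auto-regressive rollout *without* simulator feedback: the decoder sees
    # its own previous prediction x_{t-1}, not the simulated humanoid state.
    poses, x_prev = [], x0
    for _ in range(steps):
        z = rng.normal(size=LATENT_DIM)  # sample from the prior
        x_prev = decode(z, x_prev)       # feed the prediction back in
        poses.append(x_prev)
    return poses
```

Because nothing in this loop reads the simulated state, any drift between the predicted and simulated poses goes uncorrected, which is the source of the floating artifacts described above.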
PULSE directly models the action distribution instead of the kinematic pose and does not need a motion imitator during inference. Directly learning the latent space without distillation is provided as an ablation in the PULSE paper (Section C.2 Table 6), and it shows that training using RL does not converge to good performance. Random sampling for the variational bottleneck together with random sampling for RL leads to noisy gradients, which hinders policy learning.
We will add additional discussion to elaborate on these points.
> **Reward Complexity**
We will clarify this in the text. The “simple reward” here refers to not needing paired full-body-and-hand MoCap data in the reward, which increases complexity. Prior art often involves graph-based [13, 14] or style rewards [5] that depend on paired data.
> **Failure Analysis**
We categorize the failures into two types: failure to grasp, and dropping the object during transport. The behavior of restabilizing the object in hand during transport might require further dexterity and additional training rewards to master. Appendix Section C.3 provides a per-object breakdown of the GRAB-Goal split. We can see that toothpaste and binoculars have the lowest trajectory following success rates (80.9% and 90.5%). In our experience, objects that can roll (such as toothpaste) and large objects (such as binoculars) are more difficult.
**Reference**
> [1] Learning human-to-humanoid real-time whole-body teleoperation
[2] AMASS: Archive of motion capture as surface shapes
[3] Expressive whole-body control for humanoid robots
[4] HumanPlus: Humanoid Shadowing and Imitation from Humans
[5] Physically plausible full-body hand-object interaction synthesis
[6] Embodied hands: Modeling and capturing hands and bodies together
[7] Kinematic Motion Retargeting for Contact-Rich Anthropomorphic Manipulations
[8] Task-oriented hand motion retargeting for dexterous manipulation imitation
[9] Humor: 3d human motion model for robust pose estimation
[10] Learning human dynamics in autonomous driving scenarios
[11] Learning Physically Simulated Tennis Skills from Broadcast Videos
[12] Residual force control for agile human behavior imitation and extended motion synthesis
[13] Simulation and retargeting of complex multi-character interactions
[14] Physhoi: Physics-based imitation of dynamic human-object interaction | null | null | Rebuttal 1:
Rebuttal: # General Response
The authors would like to thank the reviewers for their time and constructive feedback. We hope that our responses clarify and address their concerns. We are glad that the reviewers find our work a "significant advance" (z5bR), our results "achieve a high success rate" (gXSs, z5bR), and our motion "impressive, natural, and human-like" (rKeN, gXSs). Here, we briefly address some common questions.
> **Transfer to the Real-world**
We fully acknowledge that the current system cannot be transferred to the real world without modification. In this work, we focus on using *simulated humanoids* to grasp diverse objects (>1200) and follow diverse trajectories, a capability that has yet to be attained in simulation with humanoids. We hope that we are taking a step towards real humanoid capabilities. The privileged state used in Omnigrasp could be replaced with values easier to access in the real world (point cloud, pose estimation, etc.), and the humanoid we use can be replaced with a humanoid robot [1]. By first learning a policy that has access to all the available information that can be provided, we hope that we can enable new capabilities for simulated humanoids and design systems that can be modified for real-world deployment (via sim-to-real transfer, teacher-student distillation, etc.)
> **Provided Global PDF**
In the global PDF, we provide two tables containing additional experiments and baselines. Table 1 demonstrates the robustness of a pretrained Omnigrasp policy under artificial noisy conditions. While this experiment will not show Omnigrasp's performance when deployed in the real world, it demonstrates the system's potential to undergo sim-to-real modification for deployment. Table 2 provides additional baselines including PHC, AMP, and training with only RL.
**References**
> [1] Learning human-to-humanoid real-time whole-body teleoperation
Pdf: /pdf/6c7ed7028b19eac4d5a40711b34a69e8d06d23fb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Abductive Reasoning in Logical Credal Networks | Accept (poster) | Summary: This paper addresses abductive reasoning tasks such as generating MAP and Marginal MAP (MMAP) explanations in Logical Credal Networks (LCNs). Given that LCNs encode sets of distributions over their interpretations, a complete or partial explanation of the evidence may correspond to multiple distributions. Thus, the authors define the maximin/maximax MAP and MMAP tasks for LCNs as finding complete or partial MAP assignments with maximum lower/upper probability given the evidence. They propose several search algorithms that combine depth-first search, limited-discrepancy search, or simulated annealing with exact evaluations of MAP assignments using marginal inference for LCNs. Additionally, they develop an approximate message-passing scheme and extend limited discrepancy search and simulated annealing to use approximate evaluations of MAP assignments during the search. Experiments show that the approximation schemes which they have proposed can scale to much larger problems compared to search methods.
Strengths: 1. The research is very detailed, providing an excellent formalization of abductive reasoning in LCNs. This formalization addresses a significant gap in previous LCN research, where solving MAP was challenging, and developing corresponding algorithms is not a trivial extension. The authors list several algorithms for solving this and successfully implement and compare them in detail.
2. The approximate algorithms proposed by the authors significantly improve both the solving time and the scale of solvable problems compared to previous search-based methods.
Weaknesses: The experimental setup is relatively simple, lacking more practical industrial examples and relying more on basic toy experiments. Additionally, there is no comparison with other methods.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. In the initial LCN papers, the authors compared LCN with ProbLog and MLN. Can your method be compared with these? For example, ProbLog is a direct derivative of logic programming, which can perform basic reasoning, including deductive reasoning as well as abductive reasoning.
2. In the experiments, different approximate algorithms show significant differences in solving time, despite having almost equivalent time complexity. How can this be explained?
3. In line 80, should it be "less likely"?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We provide responses to your questions and concerns below.
We would like to emphasize that our paper provides the first study dedicated to MAP and MMAP inference in LCNs and to the best of our knowledge there are no other baseline algorithms for solving these tasks in LCNs.
Q1: Previous work on LCNs showed that the Problog/MLN formalisms cannot really be used to model the same benchmark problems that LCNs can model (for instance, Problog/MLN do not allow conditional probability bounds). Therefore, a direct comparison with these methods is not really possible on the benchmark problems we consider in our work.
Q2: The discrepancies between LDS/SA and ALDS/ASA in terms of running time can be explained by the slightly different search spaces explored by the two classes of algorithms. LDS/ALDS explore up to 2^d nodes, where d is the discrepancy value, and every single assignment is evaluated from scratch. In contrast, SA/ASA are limited to M=30 flips (i.e., assignments) in our experiments, but some of these assignments may be generated multiple times; in this case, SA and ASA use caching: if the current MAP assignment was solved before, they retrieve its value from the cache, thus avoiding its re-evaluation. Therefore, SA/ASA are more efficient than LDS/ALDS, and we demonstrate this in our experiments. Furthermore, the underlying ipopt solver we use for the non-linear programs corresponding to the conditioned subproblems often suffers from numerical precision issues, and some subproblems may take longer to solve than others; consequently, the order in which these conditioned subproblems are considered can impact the overall running time. We will discuss these aspects in more detail in the paper.
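The caching behavior described above can be sketched as follows. This is a toy illustration only: the `evaluate` callback stands in for the expensive exact assignment evaluation (the ipopt non-linear-program solve in the actual algorithm), and all names and the cooling schedule are illustrative, not from the paper.

```python
import math
import random

def simulated_annealing(evaluate, n_vars, max_flips=30, temp=1.0, seed=0):
    """Toy SA over boolean MAP assignments with caching: assignments seen
    before are looked up instead of being re-evaluated from scratch."""
    rnd = random.Random(seed)
    cache = {}

    def score(assignment):
        # The costly call happens at most once per distinct assignment.
        if assignment not in cache:
            cache[assignment] = evaluate(assignment)
        return cache[assignment]

    current = tuple(rnd.choice([0, 1]) for _ in range(n_vars))
    best, best_val = current, score(current)
    for _ in range(max_flips):
        i = rnd.randrange(n_vars)
        neighbor = current[:i] + (1 - current[i],) + current[i + 1:]
        delta = score(neighbor) - score(current)
        # Accept improvements always; worse moves with Boltzmann probability.
        if delta > 0 or rnd.random() < math.exp(delta / max(temp, 1e-9)):
            current = neighbor
        if score(current) > best_val:
            best, best_val = current, score(current)
        temp *= 0.9  # geometric cooling
    return best, best_val, len(cache)  # cache size = distinct evaluations
```

With at most one new assignment generated per flip, the number of expensive evaluations is bounded by the flip budget plus one, regardless of how often assignments are revisited.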
Q3: Yes, it was supposed to be “less likely” – we will correct the typo, thanks for the catch.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response. I appreciate the clarification, especially regarding the runtime, which has addressed my concerns. However, I still believe that comparing your method with approaches outside of LCN, even if those methods use the most naive or brute-force algorithms, is necessary. Nonetheless, I consider this paper to be solid work and will maintain my current rating. | Summary: This paper proposes how to solve MAP and Marginal MAP queries for Logical Credal Networks (LCNs). LCNs are a class of graphical probabilistic logic models with the expressiveness to represent cycles as well as marginal and conditional probability bounds on logical formulae. The authors first present three search algorithms for exactly computing MAP/MMAP bounds on queries to LCNs. Then, because the MAP/MMAP problem for LCNs is NP-Hard, the authors present three approximation methods that offer considerable speedup at only a small cost to accuracy. The supplemental material contains formal proofs and extensive experiment details.
Strengths: The authors do a good job of comprehensively studying the problem of MAP/MMAP inference for LCNs. Consequently, this paper is relevant and useful for researchers and practitioners interested in graphical probabilistic logic models. In particular:
* The authors present a variety of different algorithms for MAP/MMAP on LCNs and theoretically prove their correctness and complexity.
* The approximation algorithms presented offer considerable speedup while achieving competitive performance with the exact solutions.
* The six algorithms (three exact + three approximate) offer good coverage of what a user of LCNs may consider.
Weaknesses: My primary critique is on the exposition and motivation.
* It would help to better motivate the strengths of LCNs by providing examples/references of why cycles and marginal+conditional probabilities show up. This would be stronger than claiming "... which may be important in many realistic use cases".
* Having more examples and figures of LCNs would be helpful, especially in Section 2.1.
* Emphasizing the NP-Hardness of MAP/MMAP for LCNs would help better motivate the need for search techniques and approximation algorithms.
* Section 2.2 is a bit dense. Moreover, it appears that Equation (8) is a vector-valued objective, which does not make sense to me. Overall, this section would benefit from a more relaxed exposition pace --- possibly in a future manuscript version.
The experiments would also benefit from having the main takeaways more explicitly stated.
* It would help to have captions that succinctly explain the main points, e.g., for Table 1: that AMAP does very well compared to DFS, LDS(3), and SA.
* It would be useful to supplement Figure 2 with a plot of the optimality gaps of the approximations, rather than simply the "wins".
Others comments:
* At present, it appears that the practical algorithms are restricted to fairly small LCNs.
* Three exact algorithms and three approximation schemes in one paper are quite a lot. It would help the reader to see the author's recommendations and discussions of their various trade-offs.
* The part on "Application to Factuality in Large Language Models" is a bit dense and sudden. If the authors deem it within the scope of the paper, it would help to supplement this with some experiments.
Minor Comments:
* Section 3.2: It would be helpful to explicitly list Algorithm 1, 2, and 3 in the paragraph headers, e.g., "Algorithm 1: Depth-first Search"
* More descriptive names for theorem labels would be helpful, e.g. "Theorem 1 (Complexity of Depth-first Search)"
Technical Quality: 4
Clarity: 2
Questions for Authors: * The space complexities appear quite extreme. Could the authors please comment on whether this is a fundamental drawback, and if so, how much better might one (in practice) reasonably expect to do?
* Figure 3 in the paper looks different from Figure 6 in the supplemental material. Could the authors please address why this might be the case?
I have saved my harshest critiques for last. I am an outsider to the graphical models community, so please forgive my ignorance. I am willing to revise my assessment if the authors could please expand on the following points that a general ML audience might have:
* Why care about LCNs?
* What are real-world cases of people using LCNs or similar models?
* What's an example of something that LCNs can capture that other graphical models can't?
* In fact, why care about graphical models at all when deep learning is everywhere? (Some comments about explainability + interpretability should be well-received)
* If the existing algorithms for LCNs are so expensive, what would be some "realistic" scenarios for which they're relevant?
Confidence: 2
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: The authors sufficiently discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We provide responses to your questions and concerns below.
We will expand the background section to include more examples of LCNs. For now, we refer the reader to the previous work on LCNs which we cite in the paper. We agree that Section 2.2 is quite dense at the moment and we will use additional space to add more details as well as a small running example. The hardness of inference in LCNs is not actually known yet. We suspect it belongs to the NP^NP^PP-hard class but proving the result formally is an open problem. We will include a short discussion to emphasize this issue. Finally, we will follow up on your suggestion and summarize the findings of our empirical evaluation in a separate subsection.
Q1: Regarding space complexity, yes, in the worst case we need to represent in memory a probability distribution over an exponentially large number of interpretations. This is a serious limitation, especially for exact algorithms. For example, in practice, our approximate scheme AMAP can handle LCN instances whose factor graph contains factor nodes involving up to 10-12 propositions.
Q2: Figure 3 in the main paper plots results with algorithm ALDS and is the same as Figure 5 in supplementary material (although there is a typo in its caption), while Figure 6 in supplementary material contains results with algorithm LDS.
Q4: Previous work on LCNs has already showcased several potential applications of LCNs including one from the chemistry domain. In this paper, we illustrate a potential application to factuality assessment for LLMs. Furthermore, [Cozman et al, 2024] has shown recently that LCNs can be used to model and solve causal reasoning tasks such as estimating the causal effect of an intervention under partial identifiability conditions.
Q3&Q5: LCNs allow specifying conditional probability bounds on logic formulae and allow directed cycles. Previous work on LCNs demonstrated that this is very useful especially when we need to combine multiple sources of imprecise knowledge in a single model. In contrast, graphical models like Bayes nets do not allow cycles nor bounds on probability values, credal networks allow probability bounds but require acyclicity. Probabilistic logics like Problog and MLNs allow undirected cycles but require point probability values. Therefore, LCNs can be viewed as a generalization of these previous models.
Q6: Graphical models are inherently interpretable models and can be used to provide explanations in a principled manner. They have been studied extensively over the past decades and there is substantial literature illustrating non-trivial applications to real-world situations.
Q7: We show that exact inference for LCNs is expensive while approximation schemes are far more scalable. These approximate algorithms are clearly applicable to the potential realistic applications presented in previous papers on LCNs (please see also our answer to Q4).
We thank the reviewer again for their detailed feedback and thoughts. If the reviewer believes we have addressed some of their concerns, we request them to consider increasing their score.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thank you for the detailed response. I am still skeptical of LCNs due to their currently limited adoption, but I am warming up to their potential usefulness. I have increased my score. | Summary: This is a paper about inference on logical credal networks, a class of graphical models that cope with interval-valued probabilistic statements on propositional logic formulae. The novelty of the paper is that it focuses on marginal MAP inference (and hence also MAP as a special case). Exact and approximate algorithms based on search strategies are proposed and empirically tested.
Strengths: The main contribution is an approximate procedure based on recent work on marginal inference in the same class of models. This procedure seems to perform well on models for which exact algorithms are too slow. The extension from marginal inference to marginal MAP is non-trivial.
The experiments are extensive and the results convincing.
Weaknesses: LCNs are not very popular models, at least for the moment, and their potential for applications to real problems is not very clear.
Something similar could be said, more specifically, to the need for tools for abductive reasoning in such models.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors do not report results on the hardness of their inference tasks. This is probably obvious, right? Yet, some comments about that would help. It would be also interesting to compare a brute-force wrt the exact methods.
- With credal models, there is a difference between the "conditional" and the "joint" versions of an inference task for the simple reason that the two models are proportional through the probability of the evidence, which is not constant in a credal setup. I believe that the authors consider a "joint" version, but this point should be made clearer.
- I don't understand how the ground-truth values in the experiments are obtained.
- The results show that the topology of the networks seems not to affect the execution time too much. Some comments about that would be valuable, as the situation is very different in other graphical models.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We provide responses to your questions below.
Q1: The hardness of MAP/MMAP inference in LCN isn’t actually known yet. We suspect it is an NP^NP^PP-hard task but proving this result formally is an open problem. We will include a discussion of this issue. The DFS algorithm described in the paper (Algorithm 1) is a brute-force algorithm (i.e., the MAP assignments are enumerated exhaustively and each one is evaluated exactly) so we do include results with such a brute-force approach.
Q2: Yes, the MAP/MMAP tasks defined in this paper can be viewed as “joint” inference tasks. They find a truth assignment to the MAP propositions and the evidence propositions can be viewed as part of that assignment. We will clarify this in the paper.
Q3: We are not clear what the reviewer means by “ground-truth values”. If they refer to the exact MAP/MMAP solutions, we do obtain them with the DFS algorithm which is an exact algorithm but only on the smallest problem instances due to the scalability issues discussed in the paper. However, if they refer to the probability values in the problem instances considered, we actually generated those values randomly as described in the experimental section (and also in the supplementary material).
Q4: That is correct. The algorithms proposed in this paper do not exploit the graph structure as commonly done in variational inference in graphical models. Understanding the factorization of the LCN is an open research problem. Recently, [Cozman et al, 2024] investigated Markov conditions and factorization in LCNs which could be used in principle to develop more efficient algorithms for LCNs. This ambitious endeavour is also part of our research agenda.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comments. All my questions/doubts have been clarified, and I am happy to confirm my positive opinion about that paper. | Summary: The paper presents an approach for MAP and marginal MAP in logical credal networks using search based algorithms. Compared to PGMs, MAP and marginal MAP is harder in LCNs since a MAP assignment could correspond to one of several distributions and therefore, to even evaluate the MAP, we need to perform marginalization which is a hard task. The exact algorithms are developed using DFS, limited discrepancy search and simulated annealing. However, since these are infeasible in practice, approximate methods are developed based on a marginal inference method for LCNs. The main idea is to compute lower and upper MAP (or marginal MAP) probabilities
Experiments are performed on synthetic LCNs and those generated from Bayesian nets. Further, an application related to testing LLMs is presented. Specifically, the idea is to connect atoms from the generated text to a source such as Wikipedia and compute factuality as the MAP score. Scalability results are shown for the LCNs used in this task for different variants of the proposed methods.
Strengths: - Adding a novel class of inference queries to LCNs can improve their applicability across different applications.
- The paper develops a comprehensive suite of MAP and MMAP exact and approximate inference algorithms for LCNs
- Results show that the proposed algorithms can scale up and find approximate MAP/MMAP solutions
Weaknesses: In terms of the significance of the proposed approaches, the results mainly show the scalability of the approximate methods and their ability to find MAP/MMAP solutions. However, the actual application of MAP/MMAP seems missing. For example, in the LLM application the results do not really tell us how useful the MAP/MMAP solution was compared to other methods, other than that the MAP/MMAP solution could be found as the LCN becomes larger. In general, I feel the proposed approaches would be much more significant if the results showed that the proposed solution improved over other approaches that could be used to solve the same problem.
Technical Quality: 3
Clarity: 2
Questions for Authors: Are there specific use cases where the MAP or MMAP solutions for LCNs can be compared with other competing methods? In general, what would be the advantages of MAP/MMAP queries for LCNs as compared to other probabilistic approaches.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations are not explicitly mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We provide responses to your questions below.
To the best of our knowledge, this paper provides the first study dedicated to MAP and MMAP inference in LCNs and therefore, there are no other baseline algorithms for solving MAP/MMAP queries in LCNs to compare with. In general, LCNs offer several advantages over existing models, namely they allow specifying probability bounds on logic formulae, do not require acyclicity, and are far more flexible to specify logic formulae compared with existing logic programming approaches.
MAP/MMAP queries for LCNs provide a principled way to generate most probable (partial) explanations for these kinds of models. For example, inferring the code in the uncertain Mastermind puzzles introduced in a previous paper on LCNs can be solved as a MMAP query and as shown previously the most effective way to solve it is by modelling the problem as an LCN rather than using existing approaches based on Bayes nets, Problog or MLN.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I think the point that LCNs can solve problems better than other statistical relational models is a good one. I do think that being the first work to add MAP and MMAP (both of which are important tasks in graphical models) to LCNs is a valuable contribution. The technical components of the work seem solid in terms of improving the scalability of MAP/MMAP queries in LCNs, but I was still a bit unsure about the quality of the approximation algorithms relative to some other approach, particularly since I think it is hard to have theoretical guarantees on the approximation. That would have made the paper much stronger in terms of significance, I feel. In summary though, this seems like a solid enough work with some weaknesses.
Rebuttal: We would like to thank all reviewers for their valuable feedback and thoughtful suggestions.
Since several reviewers have asked for more justification for the formalism, we would like to emphasize that LCNs offer a language to deal with many AI settings where probabilities and constraints interact, and that they are meant to go beyond classical graphical models such as Bayesian networks and the like. For instance, LCNs offer a path to dealing with non-identifiability in causal reasoning, as was recently shown by [Cozman et al, 2024]. Furthermore, LCNs can offer a path to uncertainty quantification where it is important to differentiate between epistemic and aleatoric uncertainties, something that is often done with probability bounds [Hullermeier et al, 2022]. Unlike existing graphical models (e.g., Bayes nets) and probabilistic logics (e.g., Problog, MLN), LCNs are more expressive, allowing conditional probability bounds on logic formulae, do not require acyclicity restrictions, and in general are more flexible for specifying logic formulae compared with existing formalisms. Previous work on LCNs has already showcased several applications that can be solved more efficiently when modeled as LCNs and where existing approaches based on graphical models like Bayes nets or on probabilistic logics like Problog and MLN fail. These applications include uncertain Mastermind puzzles, credit card fraud detection with imprecise domain expert knowledge, as well as an application from the chemistry domain involving a prediction task using imprecise domain expert knowledge and molecular fingerprinting data [Marinescu et al, 2022, 2023]. Clearly, identifying additional real-world applications for LCNs is an open problem and is also part of our ongoing research agenda.
References:
[Cozman et al, 2024] F. Cozman, R. Marinescu, J. Lee, A. Gray, R. Riegel, D. Bhattacharjya. Markov Conditions and Factorizations in Logical Credal Networks. In International Journal of Approximate Reasoning (IJAR), vol. 172, 2024.
[Hullermeier et al, 2022] E. Hullermeier, S. Destercke, and M. Shaker. Quantification of Credal Uncertainty in Machine Learning: A Critical Analysis and Empirical Comparison. Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (UAI), 2022. | NeurIPS_2024_submissions_huggingface | 2024 | Summary: Logical Credal Networks (LCNs) are a probabilistic logic framework designed for representing and reasoning with imprecise knowledge. While previous research on LCNs has focused on marginal inference, there has been a lack of exploration in abductive reasoning within this context. This paper addresses this gap by investigating abductive reasoning in LCNs using Maximum A Posteriori (MAP) and Marginal MAP (MMAP) queries. To solve MAP and MMAP tasks, the authors propose techniques based on Depth-First Search (DFS), Limited Discrepancy Search (LDS), and Simulated Annealing (SA). Additionally, to improve time complexity, they introduce a method that utilizes a message-passing approximation scheme, allowing for more efficient and scalable solutions.
Strengths: - The paper introduces a novel method for computing MAP and MMAP in Logical Credal Networks (LCNs), which were not previously addressed in this context. It proposes a more efficient search method compared to the brute force approach for MAP calculation.
- Unlike previous studies, this research reduces complexity by using an approximation method to address time complexity.
- This approach is theoretically well-proven and demonstrated through experimental results.
- By adding a simple method called Message Passing Approximation to LDS and SA, performance was effectively enhanced.
- The practical application of the algorithm is shown, proving its usefulness and suggesting future development directions.
- Since this paper shows that LCNs' knowledge representation is better than that of other existing approaches, many follow-up studies using LCNs could be conducted.
Weaknesses: Comparison with prior work & Experiments
- Mention how you build upon the previous works by Radu Marinescu et al. The introduction mentions that existing approaches use heuristic algorithms or DP algorithms for MAP and MMAP inference, and this paper does not employ a significantly different method.
- If this is a follow-up to the aforementioned research, wouldn't it have been better to demonstrate the performance of the previous research with the addition of Message Passing Approximation? The current paper compares the method added to LDS and SA. The advantages of using SA and LDS compared to previous research are not clearly demonstrated.
- While I agree that there may not be existing studies that have introduced LCNs, there are certainly prior studies that have tackled MAP and Marginal MAP tasks. It would be beneficial to include baselines comparing the performance of the proposed algorithms with those of existing studies. Without such baselines, the current experimental results only allow for comparisons among the proposed algorithms themselves, making it difficult to ascertain whether these algorithms are superior to those from other research. The absence of baseline comparisons hampers a complete understanding of the contribution of the proposed methods.
- A comparison and introduction of MAP estimation methods in Credal Networks (CN) and Bayesian Networks (BN) would have been beneficial to understand the practical advantages over these existing methods. (Probably author thought this was the scope of the prior work - Logical Credal Network)
- It is unclear whether ALDS and ASA are needed instead of AMAP. Experimental results show that AMAP consistently outperforms in terms of CPU time (in Tables 1, 2, 3), and the auxiliary measure of solved problem instances (in Tables 1, 3) is always 10/10 with AMAP. These results raise questions about the necessity of ALDS and ASA. Although the paper attempts to address these concerns with Figure 2, it demonstrates that sufficient problem-solving can be achieved without the optimal LCN, suggesting that the approximation methods of ALDS and ASA may be inefficient.
Presentation
- There is insufficient evidence for the statement in LINE 24: "Logical Credal Networks (LCNs) have focused exclusively on marginal inference, i.e., efficiently computing posterior lower and upper probability bounds on a query formula."
- The main contribution seems to be Algorithm 4, but its emphasis is not different from Algorithms 1-3, which perform inference through DFS, making it difficult to identify the core of the paper.
- Line 36: Is the term "probability bound" more accurate than "imprecise probability"?
- Section 3 lacks sufficient explanation of the proposed methodology, making it difficult for readers to fully understand it. Additionally, the paper lacks a clear analysis of the time complexity and space complexity of the algorithms, which is crucial for evaluating their efficiency.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Could you provide more detailed descriptions of the algorithm implementation? Additionally, could you include a clear analysis of the time and space complexity of the algorithms to help evaluate their efficiency? I recommend this paper should answer these questions in the Appendix. (Could be in the supplementary material)
- Could you include baselines comparing the performance of the proposed algorithms with those of existing studies? Without such baselines, it is challenging to determine whether your algorithms are superior to existing research.
- Could you clarify the specific advantages of ALDS and ASA over AMAP, and provide additional justification for their inclusion? How do these algorithms contribute to this paper?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - As the authors mentioned, the method still relies on heuristic approaches for finding MAP, indicating a need for further development. The proposed method is not yet practical for real-world applications without additional improvements, as evidenced by the computational overhead and limited scalability shown in the experiments.
- Arbitrary Setting of Hyperparameter Values:
* Size of problem (n): Figure 3 shows a trend where CPU time increases as the discrepancy increases for n = 7. However, Table 1 shows experiments conducted for n = 5, 8, 10, 30, 50, 70 for the same datasets. It is unclear why only Figure 3 uses other values, indicating an arbitrary setting of hyperparameters without a specific criterion.
* Discrepancy (delta): Table 1 and Table 2 use a discrepancy value of 3, whereas Table 3 uses a discrepancy value of 2 for the experiments. The rationale behind these hyperparameter settings needs to be clarified. Setting discrepancy values without a consistent criterion undermines the study's consistency and can confuse interpreting the results. Furthermore, a discrepancy of 2 appears to be an elbow point, suggesting it is an optimal parameter value for ALDS(2). Maybe, results for the large data(n = 30, 50, 70) could have been obtained using ALDS(2).
* Contexts per atom (k): In Table 3, since k is fixed to 2, you don't have to indicate it repeatedly in the table. Also, it would be better to append the results of experiments with various k.
- Uncertain Contribution: While this paper proposes various algorithms based on LCN, it is questionable whether the introduction of LCN is necessary for solving Marginal MAP. I summarize the points previously mentioned from this perspective. Firstly, in the performance of the approximation algorithms, it is evident that AMAP, which relatively fails to find the optimal LCN, performs better. This result weakens the necessity of LCN to solve the marginal MAP problem. Additionally, it is difficult to prove the superiority of LCN without a theoretical comparison (e.g., time complexity, space complexity) or experimental comparison (i.e., experimental results) between LCN-based algorithms and existing algorithms. To prove the contributions of this paper, it seems necessary to supplement these aspects.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback. We provide responses to your questions and concerns below.
The MAP and MMAP inference tasks have been extensively studied over the past decades in the context of classical graphical models such as Bayesian networks or Markov networks. However, algorithms developed for those models such as variable elimination or depth-first AND/OR branch and bound are not directly applicable to LCNs and therefore a direct comparison with those kinds of methods is not possible. For example, the AND/OR search algorithms for MMAP in Bayes nets presented in [Marinescu et al, JAIR-2018] are guided by a heuristic function derived from a variational bound on MMAP in Bayes nets. That bound cannot be computed in an LCN and therefore the AND/OR search algorithms mentioned simply do not work on LCNs.
Previous papers on LCN have only focused on exact and approximate algorithms for computing upper and lower probability bounds on a query formula (also known as marginal inference). Virtually nothing is known about MAP and MMAP inference in LCNs. Therefore, the contribution of our paper is to bridge this gap and present the first ever exact and approximate algorithms for solving MAP and MMAP queries in LCNs.
Regarding the experiments, in Tables 1, 2 and 3 we chose a discrepancy value such that the search space explored by LDS/ALDS would have a comparable size to that of SA/ASA. However, we experimented with many more discrepancy values (up to 7) but observed that a larger discrepancy value has a negative impact on running time and we illustrate this behavior with Figure 3.
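For readers unfamiliar with the discrepancy parameter, the following is a minimal, hypothetical sketch of the limited-discrepancy idea over Boolean assignments (not the paper's ALDS implementation: the variable ordering and the MAP scoring oracle are omitted, and the heuristic's preferred value is assumed to be 0 for every variable). The number of assignments visited grows as the sum of binomial coefficients C(n, d) for d up to the discrepancy bound, which illustrates why larger discrepancy values quickly hurt running time:

```python
from itertools import combinations


def lds_assignments(n, max_discrepancy):
    """Visit 0/1 assignments over n variables in limited-discrepancy order.

    Illustrative sketch only: the heuristic's preferred value is taken to
    be 0 for every variable, so a "discrepancy" flips one variable to 1.
    Assignments are visited with fewer discrepancies first.
    """
    for d in range(max_discrepancy + 1):           # discrepancy budget used
        for flipped in combinations(range(n), d):  # which variables deviate
            assignment = [0] * n
            for i in flipped:
                assignment[i] = 1
            yield tuple(assignment)
```

For n = 3 and a discrepancy bound of 1, only C(3,0) + C(3,1) = 4 assignments are visited instead of all 8, and the gap widens rapidly as the bound grows.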
Q1: We included the actual Python implementation of the proposed algorithms in the supplementary material (see the exact_map.py and approx_map.py scripts) and we are currently in the process of open sourcing our code. Theorems 1, 2, 3 and 4 in the main paper provide the time and space complexity bounds of the proposed algorithms. Their proofs are included in the supplementary material.
Q2: To the best of our knowledge, our paper provides the first study on MAP and MMAP inference in LCNs and therefore there are no other baseline algorithms to compare with on these two tasks.
Q3: Algorithms ALDS and ASA could potentially improve the solution found by AMAP. More specifically, the initial solution found by AMAP is most likely a local optimum, but if more time is available then we can use ALDS/ASA to search for a better solution. Figure 2 in the main paper is meant to illustrate the benefit of using ALDS/ASA on top of AMAP. In this case, both ALDS and ASA were initialized with the solution found by AMAP and the plot shows how many times ALDS/ASA found a better solution compared with the initial one upon exceeding the time limit. We will expand the discussion in the paper to emphasize the benefits of ALDS/ASA over AMAP.
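To illustrate how a local-search stage can refine an initial solution in this way, here is a generic simulated-annealing sketch (illustrative only, with hypothetical names: in the paper's setting `init` would play the role of the AMAP solution, `score` an approximate MAP value to maximize, and `neighbor` a small change such as a single-variable flip):

```python
import math
import random


def refine_by_annealing(score, neighbor, init,
                        steps=1000, t0=1.0, cooling=0.995, seed=0):
    """Generic simulated annealing starting from an initial solution."""
    rng = random.Random(seed)
    current = best = init
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = score(candidate) - score(current)
        # Always accept improvements; accept worsening moves with
        # probability exp(delta / t), which shrinks as t cools down.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate
            if score(current) > score(best):
                best = current
        t *= cooling
    return best
```

The key point matching the rebuttal: the procedure can only return something at least as good as `init`, so running it on top of an AMAP-style initial solution trades extra time for a chance to escape the local optimum.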
---
Rebuttal 2:
Comment: Thank you for answering my questions.
As another reviewer commented, it would be beneficial to initiate a discussion with the audience on how we can incorporate the idea of symbolic reasoning (LCN) in the era of large neural networks. Potentially, neural networks could provide some unknown LCN sentences that were missed by domain experts, making the overall framework more comprehensive.
As the last reviewer (ZTSx) mentioned, comparing your method with approaches outside of LCN could be helpful to justify that LCN is also practically useful among diverse solutions. Additionally, providing more discussion on the benefits of ALDS/ASA over AMAP would help in the detailed understanding of the methods.
Although I am not an expert in the field, I believe this paper is beneficial in advancing research towards building a System 1+2 engine. I look forward to seeing more contributions in neuro-symbolic abductive reasoning. I will maintain my positive score.
I would like to suggest lowering the entry barrier of this manuscript for audiences who aren't familiar with LCN and MMAP. For instance, including toy examples of MAP and MMAP in medical diagnosis or fault detection could effectively illustrate how these inference tasks are applied in real-world scenarios. While the supplementary material includes LCNs for multiple real-world scenarios, some readers may also require background knowledge on MAP and MMAP. Consider reorganizing the manuscript and supplementary materials to enhance readability.
-----
There are still a few remaining questions, so I would like to ask for some additional clarification.
> Q1. Could you provide more detailed descriptions of the algorithm implementation? Additionally, could you include a clear analysis of the time and space complexity of the algorithms to help evaluate their efficiency? I recommend this paper should answer these questions in the Appendix. (Could be in the supplementary material)
> A1. We included the actual Python implementation of the proposed algorithms in the supplementary material (see the exact_map.py and approx_map.py scripts) and we are currently in the process of open sourcing our code. Theorems 1, 2, 3 and 4 in the main paper provide the time and space complexity bounds of the proposed algorithms. Their proofs are included in the supplementary material.
We feel that an additional explanation in the proof would be beneficial for clarity. For instance, in the proof of Theorems 1, 2, and 3, it is mentioned that the complexity is O(2^2^n) because the LCN has 2^n interpretations, but this point could be elaborated further to ensure a clear understanding. Regarding the proof of Theorem 4, the claim that "the complexity of algorithm AMAP is dominated by the complexity of the factor-to-node messages" could benefit from additional clarification. Specifically, it would be helpful to show the time complexities of the other parts of algorithms to support this claim.
> Q2. Could you include baselines comparing the performance of the proposed algorithms with those of existing studies? Without such baselines, it is challenging to determine whether your algorithms are superior to existing research.
> A2. To the best of our knowledge, our paper provides the first study on MAP and MMAP inference in LCNs and therefore there are no other baseline algorithms to compare with on these two tasks.
We are curious about the practical benefits of using the proposed MAP/MMAP algorithms with LCNs compared to Bayesian networks and Credal Networks. In scenarios where LCNs are expected to be advantageous due to the richer information they encode, it would be helpful to know the extent of these benefits. Additionally, since LCNs use more information, is there a concern regarding potential overfitting?
> Q3. Could you clarify the specific advantages of ALDS and ASA over AMAP, and provide additional justification for their inclusion? How do these algorithms contribute to this paper?
> A3. Algorithms ALDS and ASA could potentially improve the solution found by AMAP. More specifically, the initial solution found by AMAP is most likely a local optimum, but if more time is available then we can use ALDS/ASA to search for a better solution.
Figure 2 in the main paper is meant to illustrate the benefit of using ALDS/ASA on top of AMAP. In this case, both ALDS and ASA were initialized with the solution found by AMAP and the plot shows how many times ALDS/ASA found a better solution compared with the initial one upon exceeding the time limit. We will expand the discussion in the paper to emphasize the benefits of ALDS/ASA over AMAP.
You mentioned that AMAP is likely to get stuck in local optima, and we would like to understand the reason behind this. Additionally, we are interested in learning more about the specific situations where ALDS and ASA perform better relative to each other. Insight into the conditions under which one algorithm outperforms the other would be valuable. | null | null | null | null | null | null |
SongCreator: Lyrics-based Universal Song Generation | Accept (poster) | Summary: This paper introduces SongCreator, a novel song-generation system designed to create complete songs with both vocals and accompaniment from given lyrics, addressing a significant gap in music generation. The system incorporates a dual-sequence language model (DSLM) and an innovative attention mask strategy, enabling it to understand, generate, and edit songs for various music-related tasks. Extensive testing shows that SongCreator performs exceptionally well, outperforming previous models in tasks such as lyrics-to-song and lyrics-to-vocals generation, and offers the unique ability to control the acoustic characteristics of vocals and accompaniment independently.
Strengths: 1. The paper tackles an important topic within the field of song generation, a complex and evolving area of AI research.
2. Detailed and comprehensive experiments are conducted to validate the effectiveness of the proposed models and methods.
3. The work is thoroughly developed, presenting a holistic approach from theory to practical application.
Weaknesses: 1. The introduction lacks clear logic and motivation, making it difficult to discern the unique contributions of this work compared to existing models like Suno and Udio. The discussion about the applicability of AI-generated content across various media types feels outdated, especially given the recent advancements in music generation technologies.
2. The presentation of related works in Table 1 is confusing; Jukebox is omitted from the table yet discussed extensively in the text. The paper primarily frames its contributions as extensions of Jukebox, which may understate the originality of the proposed methods.
3. The exclusion of prominent music generation products like Suno and Udio from a detailed discussion is a notable oversight. This could either be addressed comprehensively within the limitations section or by providing a clearer comparison within the main text to delineate how this work differentiates from those products.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does the proposed model handle the arrangement task without access to traditional music sheets?
2. While SongComposer operates on MIDI files, how does the proposed model manage composition tasks directly from audio files?
3. Given that harmony is a subjective quality and a fundamental goal of music generation, how do the authors define and assess harmony in their evaluations, and why do references [15-25] lack this aspect?
4. Does the term "universal song generation" carry specific implications within this context, and is its universality considered a significant contribution of this work?
5. Is this the first model capable of generating both vocals and accompaniment from lyrics alone? What unique capabilities does your model have that distinguish it from others?
6. The related work section touches on singing voice synthesis and speech editing but lacks a detailed discussion on lyric-based music generation. What is the rationale behind this selection?
7. Figure 1 lacks a clear caption explaining its elements, which could lead to confusion. Clarifications on what the audio icons and the term "song" represent would be beneficial.
8. On line 215, how was the decision made to allocate 80% and 20% in your methodology?
9. The structure of input and output for both training and generation phases appears complex. Could you clarify whether the model supports multiple combinations of lyrics, vocals, and accompaniments during these phases?
10. How is the song editing task implemented? How does the system manage edits that only modify part of the lyrics but require corresponding changes in vocals and accompaniments? It would also be beneficial to understand the robustness of the editing performance across diverse and extensive datasets.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The most critical limitation noted is the suboptimal audio quality, which appears fragmented and affects the overall user experience with the generated music. This issue could significantly impact the practical deployment and acceptance of the proposed model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful reading of our paper. We hope the following addresses all concerns mentioned.
**Regarding the differences from Suno and Udio**
We appreciate the reviewer’s constructive comments and will revise the introduction to better highlight our unique contributions. Since these products do not publicly disclose their methods, detailed comparisons are challenging. However, our main innovations and contributions are the design of the DSLM and the corresponding attention masking strategies. For tasks like song generation, which require generating two temporally aligned sequences such as vocals and accompaniment, DSLM offers significant advantages over independent single-stream or dual-stream modeling, and enables independent control over the generated vocals and accompaniment. We believe it is a novel solution to these tasks.
Furthermore, our model achieves diverse song generation capabilities in a unified framework through the attention mask strategy, such as accompaniment-to-song, song editing and vocals editing in song, which Suno and Udio cannot currently achieve. Of course, due to limitations in resources and data, there are still gaps in audio quality and in control through textual descriptions compared to these products. These limitations are noted in the paper, and we will provide a clearer comparison in the final version.
**Regarding the introduction of Jukebox**
We are thankful for the reviewer's constructive comment. Jukebox is the first and only published literature attempting song generation and should be cited in Table 1. It models vocals and accompaniment as a single entity, leading to several limitations. Our work is not an extension of Jukebox; we propose a completely different framework with DSLM and attention mask strategy specifically designed for song generation. Our approach enhances the musicality and quality of generated songs while providing flexible control, generation, and editing capabilities. As mentioned in the first response, our model achieves diverse song generation tasks in a unified framework. Most of these tasks have not been accomplished by previous models, including Jukebox. Multi-task learning further improves the model’s performance across these tasks.
**Regarding the arrangement and composition tasks**
As the reviewer mentioned, as an end-to-end generative model, our proposed model does not handle arrangement and composition tasks by explicitly predicting traditional music sheets or MIDI files. Instead, we train the model to directly generate natural and musical accompaniment, vocals and song based on given conditions, such as lyrics. The outputs of the model are audio, not music sheets or MIDI files. This approach enables our model to learn the knowledge for arrangement and composition and to generate songs without traditional music sheets or MIDI files.
**Regarding the harmony**
We define harmony as whether the vocals and accompaniment sound harmonious and pleasant together. This is crucial when generating natural-sounding songs that include both vocals and accompaniment. The works referenced in [15-25] focus on generating either vocals or accompaniment music alone, so the concept of harmony is not applicable. This perspective comes from SingSong, which involves generating instrumental music that can be naively mixed with the input vocals, and we define it as harmony.
**Regarding the universal song generation**
The term “universal song generation” refers to our model’s ability to perform various song generation tasks beyond lyrics-to-song, including editing, continuation, and generation from pre-determined track. This universality and flexibility are significant contributions and unique capabilities of our work. Additionally, it enables multi-task training, which further enhances the model’s generation capabilities.
**Regarding the unique capabilities**
As mentioned in the second response, Jukebox was the first model capable of generating both vocals and accompaniment from lyrics alone. However, our model improves the musicality of generated songs and offers more diverse song generation features, as detailed in the first and second responses.
**Regarding the selection of related work**
Jukebox is the only published literature on lyric-based music generation. We have provided a detailed introduction to Jukebox in the introduction section, so we chose not to repeat it in the related work.
**Regarding the Figure 1 Caption**
Thank you for the constructive comment. We will make the necessary revisions in the final version of the paper.
**Regarding the audio quality and using the None strategy 20% of the time**
We take these seriously and have provided a detailed explanation in the global rebuttal section at the top.
**Regarding the structure of input and output**
As the reviewer mentioned, our model supports multiple tasks, each with various combinations of lyrics, vocals, and accompaniments as input. This demonstrates the model's capabilities and flexibility. Specific input and output structures and corresponding tasks are detailed in Table 2. During training and generation, we set up various input and output combinations according to the tasks.
**Regarding the implementation of song editing**
We thank the reviewer for their question about our editing task. In this task, users provide the edited lyrics together with the start and end points of the segment to be edited. The segment following the end point is used as a prompt, while the segment preceding the start point is treated as the already generated part, with a special <EDIT> token separating the two. Since the LM is trained in an autoregressive manner, the system continues generating the edited segment based on the already generated part and then seamlessly transitions into the prompt segment. To test editing performance, we manually constructed a dataset of 30 examples, encompassing songs of different styles performed by different singers.
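One plausible reading of this sequence construction can be sketched as follows (a hypothetical illustration, not the actual implementation: the helper name is invented, token types are simplified, and the exact ordering of the prompt and generated parts may differ in the real system):

```python
def build_edit_context(pre_segment, post_segment, edit_token="<EDIT>"):
    """Assemble a conditioning sequence for autoregressive editing.

    Hypothetical sketch: the segment after the edit end point serves as
    the prompt, the segment before the edit start point is treated as
    already generated, and a special <EDIT> token separates the two.
    The LM would then generate the edited segment after this context.
    """
    return list(post_segment) + [edit_token] + list(pre_segment)
```

Under this reading, generation continues after `pre_segment`, with the prompt placed up front so the model can condition on what follows the edited region.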
---
Rebuttal Comment 1.1:
Comment: Several critical issues remain after reviewing the response:
**Unconvincing Performance Demonstration:**
As an audio-based music generation work, this paper does not compare with models such as Suno and Udio. It is not convincing that the lack of comparison is because Suno and Udio do not publicly disclose their methods (mentioned in the response). In most cases, a user input is sufficient for Suno and Udio to generate results for comparison. I have used this method to generate music and compared it with the music generated by this paper for tasks such as lyrics-to-song, lyrics-to-vocals, accompaniment-to-song, vocals-to-song, music continuation, song continuation, vocals continuation, and accompaniment-to-song (no lyrics). The proposed model fails to generate high-quality music in terms of vocal pronunciation, fluency, and background noise. Additionally, the proposed model only generates music at the phrase level, while Suno and Udio can generate music with a complete structure of multiple sections.
**Lack of novelty:**
The key contribution claimed by the paper is the encoder-decoder architecture on two audios. However, the use of an encoder-decoder architecture in music generation is not novel. For example, various groups have reported relevant work from 2020 to 2024, as shown below:
[1] Dhariwal, Prafulla, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. "Jukebox: A generative model for music." arXiv preprint arXiv:2005.00341 (2020).
[2] Donahue, Chris, Antoine Caillon, Adam Roberts, Ethan Manilow, Philippe Esling, Andrea Agostinelli, Mauro Verzetti et al. "Singsong: Generating musical accompaniments from singing." arXiv preprint arXiv:2301.12662 (2023).
[3] Zhiqing, Hong, Huang Rongjie, Cheng Xize, Wang Yongqi, Li Ruiqi, You Fuming, Zhao Zhou, and Zhang Zhimeng. "Text-to-Song: Towards Controllable Music Generation Incorporating Vocals and Accompaniment." arXiv preprint arXiv:2404.09313 (2024).
---
Reply to Comment 1.1.1:
Title: Response to the reviewer
Comment: Thanks for the reviewer's insightful comments. We would like to reply point by point here:
**Regarding Novelty**
Firstly, we would like to note that our primary focus is on **improving the modelling of discrete music representations** within the encoder-decoder architecture to enhance the musicality of generated songs. As mentioned in lines 59-73 of the paper, different from the single-stream language model (LM) used in Jukebox, Singsong and Text-to-Song, our key innovation lies in **introducing a novel LM, DSLM, along with corresponding attention masking strategies.** It significantly enhances the musicality of generated songs by **improving performance for dual-sequence modelling**, and **completes song generation tasks of various forms in a single model**.
1. **Improving Performance for Dual-Sequence modelling**: For tasks involving aligned sequences such as vocals and accompaniment, DSLM demonstrates superior performance. In our experiments, we have compared our model with various LM-based models:
- MusicLM (Similar to Singsong) uses LMs for semantic tokens followed by acoustic tokens.
- MusicGen (Similar to Text-To-Song) uses transformer-based models to directly predict acoustic tokens.
- GPT (Similar to Jukebox) uses autoregressive transformers to model discrete tokens.
Our experimental results indicate that DSLM outperforms these LM-based single-stream modelling approaches in most tasks. Additionally, DSLM supports independent control over generated vocals and accompaniment, which is beyond the capabilities of the mentioned previous works.
2. **Accomplishing Various Song Generation Tasks in a Single Model**: As mentioned earlier, DSLM can perform multiple tasks within a single model, such as generation, editing and accompaniment-to-song, and multi-task learning further enhances the musicality of generated songs. These advantages are beyond the capabilities of previous works in the literature, which required training specialised models for each task. We believe our approach provides valuable insights for other dual-sequence modelling tasks.
**Regarding Comparison with Suno**
We are thankful for the reviewer's comment and would like to discuss this issue to address the reviewer's concerns.
Firstly, it is challenging to make a fair comparison between our proposed method and Suno. Due to constraints related to **data collection costs and music copyright**, our dataset is relatively small (270,000 songs compared to Jukebox's 1.2 million) and of lower quality (primarily sourced from non-professional singers online). These limitations significantly affect the fluency, background noise, and vocal pronunciation in the generated songs. As a commercial product, Suno likely has access to **more extensive and higher-quality data**. Nevertheless, it is noteworthy that our generated songs **achieve musicality and quality close to the ground truth samples** in most tasks. This indicates that DSLM effectively maximizes performance despite the data limitations. We believe that increasing the quantity and quality of data will further enhance the results.
Secondly, our focus is on better modelling music representations rather than improving the encoding and decoding processes of music. To ensure **fair comparisons**, we conducted all experiments using **the same components (BEST-RQ, LDM, and Encodec) for audio encoding and decoding**. This approach aligns with previous works, such as MusicGen and Singsong, to prevent the influence of different encoding and decoding methods. Experiments demonstrate the strong performance and flexibility of DSLM, capable of **handling multiple song generation tasks with a single model** and **outperforming specialized baselines in most tasks**. In contrast, Suno's audio encoding and decoding methods are not disclosed, making **it difficult to rule out the influence of these modules**. Additionally, audio encoding and decoding are areas of long-standing research, significantly impacting audio quality, noise, and clarity. We plan to explore this in future work to enhance the quality of synthesized songs.
Finally, our model supports several capabilities that Suno and previous works do not, such as **song editing, vocal editing, and vocal editing in songs**. As ***#Reviewer H7YT*** mentioned, these diverse editing capabilities are highly practical for music production. In **accompaniment-to-song and vocals-to-song**, our model also differs from Suno's. We follow the requirements of previous works to ensure that **the input tracks remain unchanged** in the final output, whereas Suno may alter the content and melody of the input tracks. This demonstrates that DSLM offers **a broader set of capabilities**, and is the first attempt in music generation to integrate such diverse capabilities within a single model rather than relying on multiple specialized models for music generation. | Summary: The paper presents a novel approach for lyrics-based song generation. The method leverages language models for semantic tokens modeling and then applies latent diffusion model to generate final music. A dual-sequence language model (DSLM) is introduced to not only handle vocals and accompaniment but also integrate them together. The model is applicable to a variety of lyrics-based song generation tasks and extensive experimental results demonstrate its effectiveness.
Strengths: - SongCreator is flexible and applicable to eight different lyrics-based song generation tasks.
- The components in SongCreator are mostly open-sourced and details of training and model hyper-parameters are provided.
- Generated samples are in good quality.
Weaknesses: - While the proposed system looks promising, it requires multiple stages and multiple models during inference. The latency is not discussed.
- Training data heavily relies on the quality of the sound separation tool (Demucs). While its quality is acceptable for two streams (vocals & accompaniment), it becomes more problematic when more streams are separated, and therefore it limits more instrument-level controls.
- Semantic tokens (either vocal or accompaniment) are a mixture of multiple features, and therefore it is challenging to support disentanglement control (e.g., tempo).
Technical Quality: 3
Clarity: 3
Questions for Authors: - How do you align lyrics and Voice Activity Detection (VAD) results? could you elaborate more on this?
- Does it support languages in addition to English?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors discussed limitations in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our work. We appreciate the constructive comments the reviewer provided, which will help us further improve our paper. We are delighted to have the following discussion with the reviewer.
**Regarding the latency discussion**
The reviewer is correct that SongCreator comprises four components for successful inference, i.e., a self-supervised model (BEST-RQ), a language model (DSLM), and a latent diffusion model. However, to achieve high-quality generation, state-of-the-art music generation models such as MusicLM [1] consists of 6 components: 3 language models (semantic, coarse & fine), a self-supervised model (w2v-BERT), a prompt encoder (MuLan) and an autoencoder (SoundStream). Similarly, state-of-the-art speech synthesis models like Seed-TTS [2] consists of 4 componets: an self-supervised model (speech tokenizer), a language model, a diffusion model and a vocoder. All of these have similar or even higher complexity compared to our work.
Considering that song generation is a complex task that includes vocal composition, instrumental arrangement and harmonious generation, our primarily focus is on optimizing the musicality and quality of the generated songs, rather than real-time requirements at this stage. Therefore, we have not discussed latency. We hope to further simplify this process to achieve real-time song generation in the future.
**Regarding the impact of Demucs quality and more instrumental-level controls**
The reviewer’s argument is thought-provoking. We have provided a detailed discussion on the impact of Demucs quality in the global rebuttal section at the top, where we explain how our approach mitigates its impact on the overall quality of generated songs. While achieving more instrumental-level controls remains challenging at present, we believe our approach offers valuable insights and assistance in minimizing the influence of separation quality as much as possible.
**Regarding supporting disentanglement control**
The reviewer is correct that disentangling control for semantic tokens, which are mixtures of multiple features, is challenging. While we are not focused on disentanglement control, we believe that our proposed DSLM can be extended to address this problem. One possible approach is to disentangle the elements within the semantic tokenizer, as explored in previous work [3, 4]. Another approach is to introduce textual descriptions to control various attributes and different streams in the generated music (e.g., tempo and different instruments), which has been widely attempted in instrumental music generation. These methods are compatible with our proposed DSLM, giving it the potential to address disentanglement control challenges in the future.
**Regarding aligning lyrics and Voice Activity Detection (VAD) result**
We are sorry that we did not explain the detailed process clearly. Specifically, we employed an automatic speech recognition (ASR) model to provide timestamps for each sentence in the lyrics and a voice activity detection (VAD) model to detect silent segments. We then select appropriate silent segments to split the data into segments of no more than 30 seconds, ensuring the completeness of the sentences. We will include detailed explanation in the final version.
**Regarding support for other languages**
Due to the cost of data collection, our experiments in the paper were conducted only on the English datasets. However, as a generative model, it is not inherently bound to a specific language. With sufficient data for a specific language, the model can be adapted to support generation in other languages.
[1] Lam M W Y, Tian Q, Li T, et al. Efficient neural music generation[J]. Advances in Neural Information Processing Systems, 2024, 36.
[2] Anastassiou P, Chen J, Chen J, et al. Seed-TTS: A Family of High-Quality Versatile Speech Generation Models[J]. arXiv preprint arXiv:2406.02430, 2024.
[3] Zhang X, Zhang D, Li S, et al. Speechtokenizer: Unified speech tokenizer for speech large language models[J]. The Twelfth International Conference on Learning Representations, 2024.
[4] Ju Z, Wang Y, Shen K, et al. Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models[J]. arXiv preprint arXiv:2403.03100, 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I appreciate the clarification for alignment between lyrics and VAD and the possible extension for instrument/disentanglement control. However, I am not sure why authors do not run a simple test for inference latency. I totally understand it requires multiple modules/stages to achieve good music quality just like the SOTA method but since you have already implemented these baselines for comparison, why not include the latency as well. It should give readers a high-level idea how fast it is and if it is not fast, what is the trade-off between performance v.s. speed. I strongly suggest authors adding such a table in the final version.
In addition, while I don't think it is critical to have comparison with SUNO for accept/reject, it would be great to include some examples to see the gap between academic research and commercial product.
Finally, given the flexibility of the proposed system and overall performance, I believe the paper is valuable to appear in NeurIPS but I will recommend authors considering the above comments. I will keep my original score.
---
Reply to Comment 1.1.1:
Title: Response to the reviewer
Comment: We appreciate the reviewer's overall positive feedback and constructive comments.
Thanks very much for the suggestion to conduct test for inference latency. We would like to supplement the evaluation by comparing the real-time factor (RTF) for SongCreator and other baselines. RTF represents the time (in seconds) required for the system to synthesize one second of waveform. The evaluation was performed on a single NVIDIA V100 GPU with a batch size of 1. We randomly selected 20 generated audio samples, each longer than 20 seconds, to conduct the evaluation. These additional results will be included in the final paper.
| model | RTF |
| ------------------- | ------ |
| MusicLM | 14.545 |
| MusicGen | 2.104 |
| GPT | 1.525 |
| GPT (Vocals & Song) | 3.059 |
| SongCreator | 2.793 |
The results indicate that methods utilizing a single LM module are significantly faster than MusicLM, which employs multiple LMs in a cascading manner. Taking into account the experiments corresponding to Table 3 in the paper, we observe that although GPT and MusicGen, which model only the song token sequence, are faster than GPT (Vocals & Song) and SongCreator, which predict multiple sequences, this gain in speed comes at the cost of reduced performance. Compared to GPT (Vocals & Song), our proposed SongCreator, which leverages DSLM to simultaneously model both vocals and accompaniment, achieves not only faster speed but also better results.
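For clarity, the RTF numbers above are simply the ratio of wall-clock synthesis time to the duration of the generated waveform, averaged over samples; a minimal sketch (our actual benchmarking harness is not shown):

```python
def average_rtf(timings):
    """Average real-time factor over (synthesis_seconds, audio_seconds)
    pairs. RTF < 1 means generation is faster than real time."""
    return sum(t / d for t, d in timings) / len(timings)

# e.g. 20s of compute for 10s of audio (RTF 2.0) and 30s for 20s (RTF 1.5)
print(average_rtf([(20.0, 10.0), (30.0, 20.0)]))  # -> 1.75
```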
Furthermore, we acknowledge the comments related to Suno and will include Suno-generated samples on the final demo page as advised. | Summary: The authors present SongCreator, a music generation system capable of simultaneous generation of vocals and accompaniment tracks. SongCreator consists of a language model generating two streams of BEST-RQ ([57] in the paper) semantic tokens, one for the vocals and the other for the musical accompaniment, a non-autoregressive transformer mixing the two streams, followed by a latent diffusion model translating the semantic tokens to VAE latents that is then decoded back to audio. The model is conditioned on lyrics and optional style audio prompts for either vocals or accompaniment.
The authors suggest using Bidirectional Cross Attention (BCA) for the two-stream language model, which is the main modeling contribution of the paper. A thorough ablation study is performed suggesting that BCA is crucial for coherent generation of songs - i.e. music with both vocals and instrumental accompaniment.
Additionally, the zero-shot voice cloning capabilities of SongCreator is demonstrated through comparison to baselines such as VALL-E ([9] in the paper), showing superiority in terms of singer voice similarity.
Finally, the benefits of multi-task training is demonstrated, and in specific, in an interesting contribution, the authors show that vocal generation benefits from dual accompaniment and vocal generation objectives seen during training.
Strengths: 1. A diverse lyrics editing capabilities of SongCreator is demonstrated. Namely three variations: Direct editing of the mixture track, editing a separate vocal track or editing the vocals given an accompaniment track. This is highly practical for music production, and demonstrates the flexibility of the proposed system.
2. The design of the Bidirectional Cross Attention (BCA) between the vocal semantic decoder and the accompaniment semantic decoder is an important modeling contribution. Moreover, it is validated extensively throughout the experiments section (section 4), demonstrating clear advantage of BCA compared to both independent two stream modeling and single stream modeling alternatives.
3. The superiority in the SECS metric compared to a VALL-E like architecture is an important contribution (table 6), demonstrating effective zero-shot voice cloning capabilities of the proposed model.
4. The significant increase in vocal generation quality, when given an accompaniment track is an important contribution, demonstrating the effectiveness of the Dual Sequence Language Modeling (DSLM) in learning from temporally aligned auxiliary musical signals.
5. The authors perform an extensive experimentation in order to validate their modeling design choices. A wide range of baselines is implemented using reproduction of prior work, in addition to ablation study on the main components of SongCreator.
Weaknesses: - The quality of samples, as demonstrated in the demo page, is relatively low compared to prior work.
- As stated in line 361, SongCreator cannot control and doesn't support global textual descriptions of genre, style or instrumentation. This is a major weakness compared to prior work.
- line 119 - The decision to use best-RQ as the semantic tokenizer should be supported with an ablation study, comparing it to open-source alternatives such as MusicFM [1] or MERT [2]. In addition, it is unclear how did the authors validate it indeed "encapsulate sufficient semantic and acoustic details" as stated in line 121?
- Though the baseline set is broad, neither baseline is an official checkpoint. All baselines are reproductions of prior work, which lowers the reliability of comparison. The comparison in table 15 reveals a significant gap in performance comparing to the SingSong official samples.
- It is unclear which samples were used for subjective evaluation. In specific, a crucial factor is whether a source separated data was used or studio stemmed data. A source separated data may bias the results towards models with audio prompts due to information leakage between the artificially extracted stems.
- No planned model checkpoints release, which reduces the reliability and reproducibility of the research significantly.
[1] A Foundation Model for Music Informatics, Won et al. 2023
[2] MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training, Li et al. 2023
Technical Quality: 4
Clarity: 4
Questions for Authors: - line 215 - what is the rationale behind using the None strategy 20% of the time?
- The sentence in lines 149-153 needs rephrasing. It is difficult to understand.
- Table 3 - how do the authors explain the opposite trend in FAD compared to the trend in subjective quality?
- Table 4 shows a significant increase in vocal quality, when given an accompaniment track. Did the author validate the diversity of such a model, in the sense that there's no information leakage from the artificially separated accompaniment track?
- In the detailed description of the dataset, in line 607, it is unclear whether "separate instrumental music and vocals" refers to stemmed data, based on studio recorded tracks, or to single instrument / a cappella performances.
- table 9 - music continuation study - why was the reproduced MusicGen omitted from this study?
- The quality of the "original song" samples from the demo page is relatively poor. In the paper, it is reported that a sampling rate of 44.1kHz is used in the work. What is the reason for the relatively low quality of the original data samples?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations of their work, as well as the broader impact of the proposed system. In specific, potential negative usage of the voice cloning capabilities of SongCreator is discussed. Moreover, the authors decide not to publish model checkpoints due to the harmful potential of such feature.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's overall positive response and the constructive comments provided. We address the concerns and questions below.
**Regarding the audio quality, semantic tokenizer and using the None strategy 20% of the time**
We take these concerns seriously and have provided a detailed explanation in the global rebuttal section at the top.
**Regarding the control through textual descriptions**
The reviewer is correct that the current model cannot control the generated songs through textual descriptions. This limitation is mainly due to the dataset. Open-source datasets only contain instrumental music with textual descriptions and lack song data with textual descriptions. Collecting and annotating song data with textual descriptions requires significant time. We look forward to addressing this issue in future work.
**Regarding the baseline set**
First, we would like to note that only Jukebox has implemented song generation. In section 4.2, we compared our model with Jukebox's official samples, showing that SongCreator was preferred in 60% of cases. Other baselines are SOTA models for instrumental music generation or speech synthesis, which can’t generate songs with both vocals and accompaniment. To ensure a reliable and fair comparison, we reproduced these models on our song dataset based on reliable open-source code.
Regarding the experiments corresponding to Table 15, it is important to note that the six samples provided by SingSong’s official sources are (at least partially) cherry-picked. In contrast, we used non-cherry-picked samples for all experiments. Additionally, in our reproduced results, SingSong performs comparably to our model in the subjective evaluations of the Vocals-to-song and Accompaniment-to-song tasks. We speculate that the better performance in their official demo is mainly due to their use of a larger and higher-quality dataset.
**Regarding the samples used for subjective evaluation**
We thank the reviewer for reminding us to introduce the samples used for subjective evaluation. During subjective evaluation, we did not use audio prompts in any experiments except for the prompt-based experiments in Tables 5 and 6, to ensure a fair comparison.
**Regarding the reliability and reproducibility of the research**
Due to concerns over music copyright and the potential misuse of voice cloning capabilities, we do not plan to release the checkpoints trained on the full dataset. To better assist readers in reproducing the experiments in the paper, we have provided detailed descriptions of the model structure and hyperparameter settings and plan to open-source our code in the future.
**Regarding the sentence in lines 149-153**
We appreciate the reviewer’s constructive comment. We will revise this part for clarity and ease of understanding.
**Regarding the opposite trend of FAD compared to subjective quality in Table 3**
Thanks for the reviewer's insightful comments. We believe one possible reason for this phenomenon is the inconsistency in evaluation criteria. On one hand, subjects focus mainly on the clarity and intelligibility of the songs, specifically whether the lyrics are accurately conveyed. On the other hand, FAD evaluates the overall quality and fidelity of the audio. Additionally, prior work [1] suggests that existing objective quality metrics, including FAD, fail to reliably predict the perceptual quality of generative music. We also observed a similar phenomenon in MusicGen. Given that we are assessing the combined vocals and accompaniment, we believe the perceptual quality in the listening study is more reliable.
**Regarding the increase in vocal quality in Table 4**
We thank this comment and apologize for the confusion caused by our presentation. In the experiments presented in Table 4, we did not provide the model with an accompaniment track. The difference between SongCreator and SongCreator (Vocal Only) lies in whether BCA and the accompaniment decoder are used during inference. SongCreator (Vocal Only) uses only the vocal decoder, while SongCreator uses the same setup as in the lyrics-to-song task, generating both vocal and accompaniment tokens before using the obtained vocal tokens to generate vocals. This means the accompaniment track in this part is still generated by the model, not artificially provided.
Furthermore, to prevent information leakage from artificially separated tracks in the accompaniment-to-song and vocals-to-song tasks, we added noise to the inputs to conceal artifacts and used only semantic tokens as inputs. This approach has been validated for its effectiveness in SingSong.
**Regarding the “separate instrumental music and vocals”**
This refers to non-vocal music data and a cappella performances rather than artificially separated tracks.
**Regarding the omission of MusicGen from the music continuation study**
First, we would like to note that MusicGen does not emphasize its capability for music continuation, focusing mainly on text-to-music generation. Therefore, we did not consider its music continuation ability. And as shown in MusicGen's paper and our experiments in Table 3, MusicGen performs worse compared to the Flattening approach. Consequently, we chose AudioLM, which uses the Flattening approach, as the baseline for music continuation.
**Regarding the reason for the relatively low quality of the original data samples**
Thanks for the reviewer’s insightful comments. High-quality song data from professional singers are often strictly copyrighted, so most of our data comes from performances by non-professional singers on the internet. Although these data samples have a sampling rate of 44.1kHz, their overall quality is relatively low due to the limitations of the recording environment and equipment.
[1] Vinay A, Lerch A. Evaluating Generative Audio Systems and Their Metrics[C]//Ismir 2022 Hybrid Conference. 2022.
---
Rebuttal 2:
Comment: I thank the authors for the clarifications. My concerns were adequately answered, and my score would remain unchanged.
The only thing that remained unclear to me is the quality of the "original song" samples on the demo page. Were these samples taken from DISCO-10M ([68] in the paper) or from the in-house datasets? In both cases, the low quality isn't fully explained by the recording environment and equipment. In case the samples were either preprocessed or processed using one of the encoder models of SongCreator, this should be mentioned both in the paper and on the demo page.
---
Rebuttal Comment 2.1:
Title: Response to the reviewer
Comment: We appreciate the reviewer's constructive comments. We would like to clarify that the "Original Song" samples on the demo page are reconstructed samples, not the original recordings. These samples have been reconstructed using BEST-RQ encoding and LDM decoding to eliminate the potential impact from the encoding and decoding processes during our experiments. Taking into account the reviewer's suggestion, we have updated the text on the demo page to accurately reflect this information. Specifically, we have changed "Original Song" to "Original Song (Reconstructed)" and added a note to explain this. We will also make the necessary revisions in the final paper to ensure that this information is clearly stated. | Summary: The authors introduce a novel system for lyrics-based song generation. It can handle various inputs (lyrics, vocal prompts, accompaniment prompts) and generate different outputs (full songs, vocals only, etc.). The paper proposes a dual-sequence language model (DSLM) that separately models vocals and accompaniment while capturing their interactions through a bidirectional cross-attention mechanism. An attention mask strategy is specifically designed to support song generation tasks of various forms. The authors present competitive performance on eight different song-related tasks.
Strengths: - The model can handle multiple song-related tasks within a single framework, including lyrics-to-song, lyrics-to-vocals, accompaniment-to-song, vocals-to-song, music continuation, and various editing tasks.
- The bidirectional cross-attention mechanism in DSLM enables the model to capture the mutual influences between vocals and accompaniment, contributing to more harmonious and coherent generation.
- The attention masking strategy enables the model to perform various tasks like generation, editing, and continuation within a single architecture.
Weaknesses: - It is not clear how the data is collected, how the audio and lyrics are processed, or what the input of the lyrics encoder is. (Words or phonemes? If so, how do you obtain them from lyrics?) Will the dataset be open-sourced? If not, it is very challenging to reproduce the experiments in the paper to validate the proposed method.
- It's not clear how to obtain the vocal prompt and accompaniment prompt and how to pass them into the model. It may be better to further explain this issue.
- In Figure 2, I can get the attention mask strategies, but the figure is confusing. The authors should use a straight line with an arrow to illustrate the mask relationship between tokens in two sequences.
- Although the authors describe the effects of different mask strategies, they only list the SA mask and BCA mask strategies for each task in Table 2 without clarifying why these masks are chosen for each task. The authors need to clarify this, even if just for one of the tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Have you compared semantic tokens obtained from different models? Or have you explored other forms of intermediate representations (acoustic tokens or continuous representations)? If so, have you analyzed and compared the differences between these different types of tokens?
- From the reviewer’s experience, Demucs does not perform very well for source separation tasks. The vocal samples often have reverb, which significantly affects the quality of the synthesis. How do the authors address these issues?
- What is the training detail of the baselines in the paper? Are the models trained on the same dataset?
- What is the training strategy for the VAE? What are the components of its loss function?
- The Lyrics-to-vocals samples are quite expressive. They sound natural and exhibit some singing techniques such as trill. What do the authors think contributes to this?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No Limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's overall positive response. We will address the specific suggestions regarding Figure 2 and the information mentioned in the rebuttal in the final version of the paper. The other concerns and questions raised are addressed below.
**Regarding the training dataset**
We thank the reviewer for reading our paper carefully. Our data is collected from the internet, including part of the DISCO-10M and some in-house data. For the audio, we employed an Automatic Speech Recognition model to provide timestamps for each sentence in the lyrics and used a Voice Activity Detection model to detect silence segments. We chose appropriate silent segments to split the data into segments no longer than 30 seconds and ensure sentence integrity. The lyrics are tokenized by the tokenizer of BERT. Due to copyright issues, we can't open-source this dataset. To assist readers in reproducing the experiments, we have provided detailed descriptions of the model structure and hyperparameter settings and plan to open-source our code in the future.
**Regarding the usage of vocal prompt and accompaniment prompt**
We followed the setup from VALL-E. Specifically, the prompt audio is converted into tokens by BEST-RQ and then passed as a prefix to the DSLM. The model uses this prefix to sequentially predict the following token sequence. During training, our vocal and accompaniment prompts are taken from the previous sentence of the target audio. During inference, we randomly select unseen prompts from the test set.
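The prefix conditioning described here can be sketched as follows; `toy_next_token` is a stand-in for the real DSLM, and all names and token values are illustrative assumptions.

```python
def generate_with_prompt(next_token_fn, prompt_tokens, max_new_tokens):
    """Autoregressively extend a sequence whose prefix is the prompt
    tokens (e.g. BEST-RQ tokens of the prompt audio); the model
    conditions on the prefix plus everything generated so far."""
    seq = list(prompt_tokens)
    for _ in range(max_new_tokens):
        seq.append(next_token_fn(seq))
    return seq[len(prompt_tokens):]  # only the newly generated tokens

# Stand-in "model": predicts (last token + 1) mod 10.
toy_next_token = lambda seq: (seq[-1] + 1) % 10

print(generate_with_prompt(toy_next_token, [3, 4, 5], 4))  # → [6, 7, 8, 9]
```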
**Regarding the choice of masking strategy for each task**
We agree with the reviewer’s suggestion and will provide detailed explanations in the Appendix. Briefly, for sequences that need to be generated, we use a causal mask in SA to support autoregressive generation. For a pre-determined track (e.g., accompaniment in accompaniment-to-song or vocals in vocals-to-song), we use a non-causal mask in SA to better encode the contextual representation. Regarding the BCA mask, when both vocals and accompaniment need to be generated simultaneously (e.g., in lyrics-to-song or song editing tasks), we use the BR strategy to consider the interrelationship between vocals and accompaniment. For song generation from a pre-determined track (e.g., accompaniment-to-song or vocals-to-song), we use the corresponding A2V or V2A strategy to ensure that the sequence to be generated can consider the full context of the other sequence. For independent sequence generation (e.g., music continuation or vocals editing), we use the None strategy to support independent generation. Ablation experiments and results supplemented in our response to **reviewer kmxW** demonstrate that our chosen masking strategies for different tasks are reasonable.
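As a rough illustration, the strategies above might translate into boolean attention masks like the following. This is a simplified sketch with our own function names, not the paper's implementation; in particular it ignores the causal alignment the real model would need inside the cross-attention.

```python
import numpy as np

def self_attn_mask(n, causal):
    """Self-attention mask over a length-n sequence; True = may attend."""
    ones = np.ones((n, n), dtype=bool)
    return np.tril(ones) if causal else ones

def bca_masks(n_vocal, n_acc, strategy):
    """Bidirectional cross-attention masks for the two streams.
    Returns (vocal->accompaniment, accompaniment->vocal); True = may attend."""
    full_va = np.ones((n_vocal, n_acc), dtype=bool)
    full_av = np.ones((n_acc, n_vocal), dtype=bool)
    none_va = np.zeros((n_vocal, n_acc), dtype=bool)
    none_av = np.zeros((n_acc, n_vocal), dtype=bool)
    return {
        "BR":   (full_va, full_av),  # both streams attend to each other
        "A2V":  (full_va, none_av),  # vocals see full accompaniment context
        "V2A":  (none_va, full_av),  # accompaniment sees full vocal context
        "None": (none_va, none_av),  # independent generation
    }[strategy]
```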
**Regarding the intermediate representations**
Thank you for the insightful comments. We have provided a detailed explanation about semantic tokens in the global rebuttal section at the top. Regarding other forms of intermediate representations, we experimented with acoustic tokens extracted from the Encodec model. In the Lyrics-to-song experiment in Table 3, we used GPT and MusicGen models, which have similar hyperparameter settings and structures (autoregressive transformer decoders). However, MusicGen’s prediction target is acoustic tokens, whereas GPT’s prediction target is semantic tokens. The results show that GPT has an advantage in terms of musicality and quality in subjective evaluations.
**Regarding the impact of the Demucs quality**
Notably, **reviewer nDSW** also shares a similar concern. We take these comments seriously and have provided a detailed explanation in the global rebuttal section at the top.
**Regarding the training details of the baselines**
All baselines are trained using similar strategies to those used for DSLM, including the same dataset, training resources, optimizer settings, and similar parameter scales. Each model was trained for 500K steps. Additionally, for a fair comparison, baselines with semantic tokens as the prediction target (e.g., GPT, SingSong (Diffusion)) shared the same BEST-RQ and LDM modules as DSLM.
**Regarding the training strategy for the VAE**
To train the VAE, we first adopted the pre-trained model provided in DAC, then fine-tuned the encoder and decoder components (i.e., replacing the vector quantizers with a diagonal Gaussian re-sampler as in LDM). We retained the frequency-domain reconstruction loss, discriminators, and adversarial loss from DAC and added a KL loss typically used for training VAEs. The VAE was trained on our prepared dataset of 100k hours of song data, which is the same as the one used for training BEST-RQ.
**Regarding the expressiveness of generated vocals**
The reviewer's observation is thought-provoking, and we are pleased to share our findings. We also noticed this interesting phenomenon and conducted experiments to verify it. Different from works that only focus on vocal generation, SongCreator generates both vocal and accompaniment tokens before using the obtained vocal tokens to generate vocals. This means that even if the model only generates vocals, the relationships between vocals and accompaniment are still considered. As presented in the experimental results (see Table 4), we find that this approach significantly enhances the musicality of the generated vocals compared to SongCreator (Vocal Only), which only considers vocal generation. This indicates that taking the relationships between vocals and accompaniment into account helps generate more expressive vocals.
Rebuttal: We sincerely appreciate the detailed feedback and constructive comments from all reviewers, which are extremely helpful to us in revising this paper. We are grateful for your recognition of the **comprehensiveness of our experiments**, and we are also glad that our approach is recognized for **its novelty, strong performance and flexibility**. Initially, we will address the major concerns and issues raised by multiple reviewers in the global rebuttal. Subsequently, we will respond to each of the specific comments made by the reviewers individually.
**Regarding the audio quality**
We acknowledge that the current audio quality is limited by the semantic tokenizer and LDM modules used. However, we would like to note that the core contribution of our work lies in the proposed DSLM and attention masking strategies for universal song generation. Given that there is no open-source semantic tokenizer or LDM module for high-quality song generation, and that previous works have been limited to generating instrumental music, we retrained the well-performing modules on song datasets to validate our proposed approach.
Although the current audio quality is temporarily suboptimal due to the interference between vocals and accompaniment, **our fair comparisons** have demonstrated that DSLM significantly enhances the musicality and intelligibility of generated songs compared to other LM-based methods. Additionally, the proposed DSLM **exhibits diverse capabilities in generating, continuing, and editing songs**, and **supports flexible control and input combinations** — advancements that were not achievable in previous studies.
Indeed, it is noteworthy that DSLM can be paired flexibly with various semantic tokenizers and LDMs, holding the potential for high-quality, universal song generation in the future. Additionally, we are committed to ongoing research into semantic tokenizers and LDMs to enhance the audio quality.
**Regarding the selection of the semantic tokenizer**
Thanks for the reviewers' suggestions. Initially, we carried out preliminary validation experiments using MERT and MusicFM. We found that while these models could reproduce high-quality accompaniment after quantization, **the clarity of the vocals was limited**. Considering that BEST-RQ also performed well in MusicFM’s experiments, we decided to train a BEST-RQ model specifically on song data, incorporating separate instrumental music and vocals to **enhance vocal clarity.** We believe that it encapsulates sufficient semantic and acoustic information, as evidenced by the retention of key song components—such as lyrics, vocal timbre, instruments, and melody—in the reconstructed audio after converting semantic tokens to audio via the LDM.
In response to the reviewers’ requests, *we plan to incorporate additional ablation studies to compare the performance of our BEST-RQ with open-source alternatives such as MusicFM and MERT.* However, due to the complexity involved in retraining multiple modules, we will include these comparative results in the final version of the manuscript.
Actually, we believe that this will not affect the novelty of DSLM. Our current choice of semantic tokenizer model was primarily to validate the effectiveness of DSLM. As mentioned above, DSLM is adaptable to other semantic tokenizers. The choice of tokenizer may affect the audio quality of the generated songs but does not impact the model’s ability to perform multiple tasks or the musicality of the generated songs.
**Regarding using the None strategy 20% of the training time**
We adopted the None strategy to allow the model to learn to generate the accompaniment or vocal track independently, supporting independent generation tasks such as music continuation. However, training the model to capture the relationships between vocals and accompaniment through the bidirectional cross-attention (BCA) is more critical for generating songs. Therefore, we configured the model to employ the BR strategy 80% of the time and the None strategy 20% of the time. This probability setting was inspired by classifier-free guidance related work [1, 2] to ensure it does not disrupt the training of the BCA.
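The per-step mixing described above can be sketched as a random draw at each training step (a minimal sketch; the function name and the use of Python's `random` module are our assumptions):

```python
import random

def sample_bca_strategy(rng, p_none=0.2):
    """Pick the BCA strategy for one training step: the None strategy
    with probability p_none (20%), the BR strategy otherwise."""
    return "None" if rng.random() < p_none else "BR"

rng = random.Random(0)
draws = [sample_bca_strategy(rng) for _ in range(10_000)]
frac_none = draws.count("None") / len(draws)  # close to 0.2 over many steps
```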
**Regarding the impact of the Demucs quality**
As the reviewers mentioned, Demucs-separated vocals can exhibit reverb, which might affect synthesis quality. However, we hope to clarify that this does not significantly impact our proposed method, for several reasons.
Firstly, the self-supervised models (BEST-RQ) with vector quantization are **noise-robust**, which is often leveraged in speech synthesis [3, 4]. Secondly, for most tasks where songs are the final generation target, we directly utilize the song tokens generated by the song decoder within our framework, rather than the vocal and accompaniment tokens. During the training of DSLM, target song tokens are extracted from the original songs without separation, and the training loss for vocals and accompaniment tokens primarily helps the model to learn the musicality of the accompaniment, the expressiveness of the vocals, and the relationships between them. Consequently, although Demucs may have limitations, our methodology effectively reduces its influence on the overall quality of generated songs.
[1] Le M, Vyas A, Shi B, et al. Voicebox: Text-guided multilingual universal speech generation at scale[J]. Advances in neural information processing systems, 2024
[2] Du Z, Chen Q, Zhang S, et al. CosyVoice: A Scalable Multilingual Zero-shot Text-to-speech Synthesizer based on Supervised Semantic Tokens[J]. arXiv preprint arXiv:2407.05407, 2024
[3] Fujita K, Sato H, Ashihara T, et al. Noise-robust zero-shot text-to-speech synthesis conditioned on self-supervised speech-representation model with adapters[C]. ICASSP, 2024
[4] Zhao X, Zhu Q, Hu Y. An Experimental Comparison of Noise-Robust Text-To-Speech Synthesis Systems Based On Self-Supervised Representation[C]. ICASSP, 2024 | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents SongCreator, a system for full song generation including the vocals and accompaniments. The system comprises several steps:
- First, a quantizer is trained to tokenize audio, which is used to tokenize the song, vocals, and accompaniments.
- Next, a language model (DSLM) conditioned on various conditioning signals such as lyrics, vocal prompt, and accompaniment prompt is trained to predict the tokenized forms of the input conditioning signals. The DSLM consists of two transformer decoders operating on the vocal and accompaniment prompts respectively. The decoders both cross-attend to each other, and the authors propose multiple masking strategies for the self- and cross-attention modules.
- The final component is a latent diffusion model which generates audio conditioned on the semantic tokens.
The first and last steps are pretrained and based on existing literature, while the DSLM is trained using a multi-task setup carefully designed to account for various tasks that the model should be able to handle, such as generating songs from lyrics, generating songs from a pre-determined vocal/accompaniment track, or song editing.
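The three stages summarized above compose roughly as follows; this is a structural sketch with stand-in functions, and none of the names, logic, or values come from the paper.

```python
def tokenize(audio_frames):
    """Stage 1 stand-in: quantizer (e.g. BEST-RQ) -> semantic tokens."""
    return [sum(map(ord, frame)) % 1024 for frame in audio_frames]

def dslm_generate(lyrics, vocal_prompt_tokens, acc_prompt_tokens):
    """Stage 2 stand-in: DSLM predicts the song token stream conditioned
    on lyrics and the vocal/accompaniment prompt tokens."""
    return vocal_prompt_tokens + acc_prompt_tokens  # placeholder logic

def ldm_decode(tokens):
    """Stage 3 stand-in: latent diffusion model -> waveform."""
    return [t / 1024 for t in tokens]

song = ldm_decode(dslm_generate("some lyrics",
                                tokenize(["v1", "v2"]),
                                tokenize(["a1"])))
```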
Strengths: - The authors have conducted a very thorough evaluation of their method using subjective and objective metrics. They have also compared against fairly strong baselines for the various tasks.
- The results reported in the paper show that their proposed DSLM is superior to standard GPT-based LMs for the chosen tasks.
Weaknesses: - The paper lacks some clarity and could benefit from improvements in the illustrations and writing. More specifically, while Fig. 1 gives a good overview of the approach, it seems to be misaligned with what is being presented in the text. At first glance it felt like the text was saying that the model accepts text prompts for describing the vocals and accompaniments, but the figure shows those signals passing through the semantic token extractor. The figure is correct, but it took a few minutes to realize that. Similarly, it is not obvious that the attention masking strategies are different for different tasks. Some pre-conditioning in the figure/method section would be beneficial for clarifying the design for the readers.
- The audio quality of the final generations is not very convincing. It is indeed better than Jukebox, though.
- The significance of the paper’s contributions seems low. The authors have themselves mentioned that the semantic tokenizer and the LDM are prior work. Furthermore, the overall design is very similar to other recent work. The main difference is in the specific tasks the authors have chosen and indeed I find that the design is useful for those tasks.
Technical Quality: 3
Clarity: 2
Questions for Authors: - One of the major contributions is the use of the BCA and the authors have ablated the utility of that component, however they seem to have skipped any ablation studies on the different masking strategies for BCA. This might be an interesting point of discussion in the paper.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have discussed limitations both in terms of technicality as well as societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thorough review regarding our study. We provide detailed responses to your concerns, as summarized below.
**Regarding the clarity of paper**
We are thankful for the reviewer's constructive comment, which we take seriously to revise this paper for a clearer presentation. Some of the amendments are described below:
1. We will emphasize in the abstract and introduction sections that the current vocal prompts and accompaniment prompts are audio rather than text.
2. We will include pre-conditioning in the introduction and method sections to clarify the use of different attention masking strategies for different tasks. Additionally, we will provide a more detailed explanation of the basis for setting different attention masking strategies in the Appendix.
3. We will refine the style and legend of the vocal/accompaniment prompt in Figure 1 to avoid ambiguity, and further highlight the relationship between different tasks and attention mask strategies in Figure 2.
**Regarding the audio quality**
We acknowledge the reviewer's comment regarding the audio quality. We take this seriously and have provided a detailed explanation in the global rebuttal section at the top.
**Regarding the contribution of the proposed method**
We appreciate the reviewer's feedback and hope to clarify the novelty of our proposed SongCreator. The core contribution lies in DSLM and the corresponding attention masking strategies, a novel approach to dual-sequence modeling for data such as songs, which contain both vocals and accompaniment. In song generation, DSLM offers advantages over independent single-stream or dual-stream modeling, while enabling independent control over the generated vocals and accompaniment. By incorporating various attention masking strategies, our model can complete diverse song generation tasks, such as generation, editing, and understanding, while multi-task training further enhances the musicality of the generated songs. These advantages are beyond the capabilities of previous works, and we believe our approach provides valuable insights for other dual-sequence modeling tasks.
Furthermore, as the reviewer mentioned, our overall framework is a well-validated approach in the audio domain (the combination of LM and Diffusion). However, on the one hand, we are the first to apply this framework to the task of **song generation**, and its performance greatly exceeds that of the previous SOTA Jukebox. On the other hand, the framework is used to validate the effectiveness of our proposed DSLM and attention mask strategy. By comparing our method with other approaches within the same framework, we demonstrated that DSLM not only enables more flexible universal song generation but also achieves state-of-the-art or competitive performance across all eight tasks.
**Regarding the ablation studies on the different masking strategies for BCA**
Thank you for the suggestion. *We have supplemented additional ablation studies on the different masking strategies for BCA. Specifically, we conducted AB preference tests for the lyrics-to-song and accompaniment-to-song tasks.* In lyrics-to-song, we compared BR with A2V and V2A, and in accompaniment-to-song, we compared A2V with BR. The results are as follows:
**Lyrics-to-song**
| BR | A2V | V2A | None | **NP** |
| :--: | :--: | :--: | :--: | :----: |
| 76% | 20% | | | 4% |
| 71% | | 25% | | 4% |
| 85% | | | 14% | 1% |
**Accompaniment-to-song**
| **BR** | A2V | **NP** |
| :----: | :--: | :----: |
| 27% | 59% | 14% |
For lyrics-to-song, the comparison between BR and None has already been presented in the paper (see Figure 4). The results indicate that in lyrics-to-song, replacing the BR strategy with other strategies leads to a significant performance deterioration, demonstrating that **the BR strategy helps the model generate harmonious vocals and accompaniment**. The None strategy, which disregards the relationship between vocals and accompaniment, performed the worst. In accompaniment-to-song, participants preferred the song generated with the A2V strategy. We believe that this is because the **A2V strategy provides more context about the accompaniment sequence when generating vocals**.
Learning Low-Rank Feature for Thorax Disease Classification | Accept (poster) | Summary: The authors proposed to use Low-rank Feature Learning (LRFL) to improve model performance specifically for thorax diseases classification. Ideas come from the assumption that the low-rank features capture the majority of the information. They implemented the LRFL by adding a self-modified regularization term, and provided theoretical results on the boundaries of the LRFL methods.
Strengths: Rigorous proof, nice writing and relative comprehensive experimentations.
Weaknesses: It seems like the authors were tackling a very large and general topic, improving model performance by learning low-rank features, for a specific application. There is no specific design related to thorax disease, the X-ray image, or even the medical image. I would assume this method could be applied to any imaging with some background noise. Therefore, since there are existing studies doing low-rank features-related experiments, I would see this is a similar implementation of the low-rank feature learning but very hard to observe valuable novelties.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. “truncated nuclear norm as”: how did the truncated nuclear norm work? May need references.
2. “Because the actual features used for classification are approximately low-rank and the high-rank features are significantly truncated, all the noise and the information about the background, or the non-disease areas on radiographic images in the high-rank features are largely discarded and not learned in a neural network.” Any reference to support your statement?
3. How does the image dimensions (HW dim & C dim) matched when you use both datasets “ImageNet-1k and X-rays (0.5M)” for pretraining?
4. Table 2, how the decision cutting threshold was chosen? Is it the same across different models/datasets?
5. I have a different thought against the authors. I understand that the low-rank features can maintain the majority of the information. However, some of the nodules or lesions or abnormalities that used for diagnosis are actually small and tiny. Then, by using the low-rank feature approach, the small region information might be lost but will not negatively impact the over-all information that much. How does the method handle this situation?
6. It seems like the authors proposed a very large and general topic, improving model performance by learning low-rank features, for a specific application. There is no specific design related to thorax disease, the X-ray image, or even the medical image. I would assume this method could be applied to any imaging with some background noise. Therefore, since there are existing studies doing low-rank features-related experiments, I would see this is a similar implementation of the low-rank feature learning but very hard to observe valuable novelties.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the review and the suggestions in this review. The raised issues are addressed below.
**1. ...how did the truncated nuclear norm work? May need references.**
The truncated nuclear norm is defined in line 171 of our paper. Existing works [1, 3, 4] perform low-rank learning by minimizing the truncated nuclear norm of the feature matrix.
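For reference, with the singular values $\sigma_1 \ge \dots \ge \sigma_d$ of the feature matrix, the truncated nuclear norm sums all but the largest $T$ of them; minimizing it pushes the matrix toward rank at most $T$. A minimal numpy illustration (our own helper, not the paper's code):

```python
import numpy as np

def truncated_nuclear_norm(F, T):
    """Sum of sigma_{T+1}, ..., sigma_d with singular values in
    descending order; T = 0 recovers the full nuclear norm."""
    sigma = np.linalg.svd(F, compute_uv=False)  # descending order
    return sigma[T:].sum()

F = np.diag([3.0, 2.0, 1.0])
print(truncated_nuclear_norm(F, 0))  # → 6.0 (full nuclear norm)
print(truncated_nuclear_norm(F, 1))  # → 3.0 (largest singular value dropped)
```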
**2. “Because the actual features used for classification are approximately low-rank and the high-rank features are significantly truncated, all the noise and the information about the background, or the non-disease areas on radiographic images in the high-rank features are largely discarded and not learned in a neural network.” Any reference to support your statement?**
Studies in the literature [5,6,7] show that learning low-rank features can enhance robustness to noise in the input images. We will add these references to our paper to support the claim.
**3. How does the image dimensions (HW dim & C dim) matched when you use both datasets “ImageNet-1k and X-rays (0.5M)” for pretraining?**
When ImageNet-1k and X-rays (0.5M) are used for the pre-training of models in our paper, all the images will be reshaped to $224 \times 224 \times 3$ following the settings in [8].
**4. Table 2, how the decision cutting threshold was chosen?...**
We performed cross-validation described in line 248-261 to decide the cutting threshold for the rank in Table 2. Yes, we performed the same cross-validation process across different models/datasets.
**5. I have a different thought against the authors...**
We respectfully disagree with the thought that “by using the low-rank feature approach, the small region information might be lost”. We now demonstrate evidence that the features of small disease regions are still captured well by our LRFL model. In particular, because ViT is known to be robust to small-sized diseases such as nodules [2], LRFL on a ViT model can still capture such small-sized disease areas. This is evidenced by the results in Average Precision (AP) for disease localization in point two of our response to Reviewer cPxj or point 3 of our global response. In these results, it is shown that the LRFL model yields higher AP for small disease areas such as nodules in disease localization.
**6. ...I would see this is a similar implementation of the low-rank feature learning but very hard to observe valuable novelties.**
We respectfully but strongly disagree with the claim that “I would see this is a similar implementation of the low-rank feature learning”. This claim misses the significant contributions of this paper.
While this paper uses the truncated nuclear norm (TNNR) for low-rank learning and TNNR has also been used for low-rank learning in the existing machine learning literature, the significance and novelty of the proposed LRFL method lies in the following two aspects, with significant advantages over the existing works.
**First, we propose a novel separable approximation to the TNNR, so that standard SGD can be used to efficiently optimize the training loss of LRFL with the TNNR**. The formulation of the separable approximation to the TNNR is described in Section 3.4 of our paper. The training algorithm with this novel separable approximation to the TNNR by SGD is detailed in Algorithm 1 of our paper. Results in Tables 1-3 show that minimizing the training loss with the separable approximation to the TNNR significantly improves the performance of baseline models for disease classification.
To further verify the efficiency of our training algorithm compared to the existing optimization method for the TNNR, we compare the training time of our LRFL models with an existing method for optimizing the TNNR, TNNM-ALM [1], on NIH ChestX-ray14, CheXpert, and COVIDx. The results in the table below show that our LRFL method achieves a 7$\times$-10$\times$ acceleration in the training process on the three datasets, demonstrating the effectiveness and efficiency of the separable approximation to the TNNR proposed in our paper.
| Methods | NIH ChestX-ray14 (minutes) | CheXpert (minutes) | COVIDx (minutes) |
| :--------------: | :------: | :------: | :------: |
| ViT-S | 54 | 90 | 23 |
| ViT-S (TNNM-ALM) | 804 | 854 | 342 |
| ViT-S-LR | 98 | 117 | 38 |
| ViT-B | 72 | 162 | 32 |
| ViT-B (TNNM-ALM) | 915 | 1461 | 418 |
| ViT-B-LR | 113 | 185 | 45 |
Second, **we provide a rigorous theoretical result justifying the proposed low-rank feature learning**. In particular, it is shown in Theorem 3.1 that the upper bound for the generalization error of the linear neural network in our framework involves the TNNR, and a smaller TNNR leads to a smaller generalization bound and thus improves the generalization capability of the network.
**References**
[1] Lee et al. Computationally Efficient Truncated Nuclear Norm Minimization for High Dynamic Range Imaging. IEEE Transactions on Image Processing 2016.
[2] Xiao et al. Delving into masked autoencoders for multi-label thorax disease classification. WACV 2023.
[3] Hu, Yao, et al. "Large scale multi-class classification with truncated nuclear norm regularization." Neurocomputing 2015.
[4] Zhang, Fanlong, et al. "Truncated nuclear norm based low Rank Embedding." Biometric Recognition 2017
[5] Gao, Ming, et al. "Noise robustness low-rank learning algorithm for electroencephalogram signal classification." Frontiers in Neuroscience 2021.
[6] Lu, Yuwu, et al. "Low-rank preserving projections." IEEE transactions on cybernetics 2015.
[7] Ren, Jiahuan, et al. "Robust low-rank convolution network for image denoising." ACM MM 2022.
[8] Xiao, Junfei, et al. "Delving into masked autoencoders for multi-label thorax disease classification." WACV 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response, as it has resolved part of my concerns and questions. Here are my remaining questions:
3. LRFL for disease localization
Thanks to the authors for conducting experiments to resolve my concerns on tiny lesion detection. May I ask if this model performance for tiny lesions still holds for other datasets as well?
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback, and our further response
Comment: Since COVIDx does not have lesion diseases, we performed the same disease localization study for the ‘lung lesion’ disease on the CheXpert data. We manually labeled the ground-truth bounding boxes of 200 images in the class of ‘lung lesion’, and the size of each bounding box is 224 pixels, which is the same as for the class ‘Nodule’ in the ChestX-ray14 data. We show the improved Average Precision (AP) results for disease localization in the table below, where the mAP for disease localization is computed following the same settings as [2] (reference in the paper). It is observed from the results that our LRFL method improves the $AP_{25}$ and $AP_{50}$ for disease localization of ‘lung lesion’ on the CheXpert data.
| Disease | Size (# of px) | ViT-S AP$_{25}$ | ViT-S AP$_{50}$ |ViT-S-LR AP$_{25}$ | ViT-S-LR AP$_{50}$ |
| :----------: | :------------: | :-----------: | :-----------: |:-----------: | :-----------: |
| Lung Lesion | 224 | 10.7 | 4.8 | 12.9 | 6.5 |
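For readers unfamiliar with the metric, $AP_{25}$/$AP_{50}$ count a predicted box as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.25/0.50. A minimal IoU sketch (our own helper, not the evaluation code of [2]):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# ≈ 0.333: counted as a hit at AP_25 but a miss at AP_50.
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))
```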
Please kindly let us know if we have addressed all your concerns. Thank you! | Summary: The paper is concerned with the problem of thorax disease classification in radiographic images. The authors propose a novel low-rank feature learning (LRFL) method which is applied on pre-trained masked autoencoders (MAE) and evaluated on two datasets (CheXpert and COVIDx). The authors also provide theoretical results on the generalization bound of the proposed approach. The authors show that their method outperforms multiple baselines on the two datasets.
Strengths: Overall, the topic of disease detection in radiographic images is of high importance and particularly improving robustness and generalization capabilities of models is a relevant research direction. The authors provide a good overview of the related work and perform many ablation studies and experiments to evaluate their method.
Weaknesses: **Limited evaluation**
- The authors motivate their method with an "adverse effect of noise and background, or non-disease areas, for disease classification on radiographic images." (L. 47-49). While it seems, that the method quantitatively outperforms multiple baselines, the authors do not provide any evidence that the proposed method is more robust to noise or background than the baselines. Also, the GradCAM visualizations in Fig. 4 are not convincing and GradCAM visualizations for many CNN-based methods (e.g., [90], Fig. 7; [2], Fig. 4) show much better localization capabilities than what is shown here for both the baseline and the proposed method. I admit that these may be different samples and visualizations are not directly comparable. Can the authors comment on this and also show visualizations using different architectures such as CNNs?
- "Unlike traditional methods, our approach introduces a separable approximation to the truncated nuclear norm, facilitating the optimization process and enhancing the generalization ability of the model, thus advancing the state-of-the-art in medical image analysis." (L. 134-137). I don't see any experiments that specifically evaluate the generalization ability of the model.
- L. 9-11: "To address this challenge, we propose a novel Low-Rank Feature Learning (LRFL) method in this paper, which is universally applicable to the training of all neural networks." - The authors apply their LRFL method only to four different architectures from one reference [2]. Whether the method universally improves the training of all neural networks is not shown in the manuscript.
**Unsupported claims**
The authors state several claims that are not supported by the literature or the experiments in the paper. Some examples:
- L. 41-43: "Clinical studies show that the disease areas on radiographic images can be subtle which exhibit localized variations, and such conditions are further complicated by the inevitable noise which is ubiquitously on radiographic images as detailed in Section 2.1" - Can the authors give examples where noise affects disease detection in the radiographic images of the datasets used in this study or provide references to support this claim?
- L. 56-57: "That is, the low-rank projection of the ground truth training class labels possesses the majority of the information of the training class labels. In fact, LFP widely holds for a broad range of classification problems using deep neural networks, such as [1, 8, 9]." - Can the authors provide a reference for this claim? None of the provided references [1,8,9] mentions low-rank features or a low frequency property at all.
- L. 14-15: "LFP not only widely exists in deep neural networks for generic machine learning [...]" - Similar to the previous point, can the authors provide a reference for this claim?
- L. 63-64: "As a result, the adverse effect of such noise and background is considerably reduced in a network trained by our LRFL method." - The authors do not perform any experiments which systematically evaluate the effect of noise or background on their method or the baseline methods (see previous point **Limited evaluation**)
**Confusing notation**
I find the mathematical notation in the paper to be inconsistent and confusing. Some examples:
- "Suppose the training data are given as $\\{\boldsymbol{x}_i, \boldsymbol{y}_i\\}\_{i=1}^n$ where $\boldsymbol{x}_i$ and $\boldsymbol{y}_i \in \mathbb{R}^C$ are the $i$-th training data point and its corresponding class label vector respectively, and $C$ is the number of classes. Each element $\boldsymbol{y}_i$ is binary with $\boldsymbol{y}_i = 1$ indicating the $i$-th disease is present in $\boldsymbol{x}_i$, otherwise $\boldsymbol{y}_i = 0$." The authors first denote with $\boldsymbol{y}_i$ a "$C$-dimensional class label vector" but then say that "$\boldsymbol{y}_i$ is binary with $\boldsymbol{y}_i = 1$ indicating the $i$-th disease present in $\boldsymbol{x}_i$". I would denote samples as $\boldsymbol{x}^{(i)}$ and class labels as $\boldsymbol{y}^{(i)}$ and then $\boldsymbol{y}^{(i)}_j$ indicates whether disease $j$ is present in sample $i$. Or use a matrix $\boldsymbol{Y}$ as introduced later in the manuscript.
- Eq. (1): If $\boldsymbol{W}\_1$ is also optimized during training, then it should be $f\_{\boldsymbol{W}\_1}(\boldsymbol{x})$ instead of $f\_{\boldsymbol{W}\_1(0)}(\boldsymbol{x})$.
- L. 171: "Using notations in Section 3.2, the truncated nuclear norm of $\boldsymbol{F}$ is $\lVert F\rVert := \sum_{i=T+1}^{d} \sigma_i$ where $T \in [0, d]$." In Section 3.2, the authors introduce with $\sigma$ the element-wise sigmoid function, which is of course not what is used to compute the truncated nuclear norm. Instead, the authors use the singular values of the matrix $\boldsymbol{F}$ without introducing them. Also, the sum is indexed from $T+1$ to $d$, which doesn't make sense for $T=d$. The same index error is also made in Eq. 3.
Generally, these inconsistencies make it hard to follow the manuscript and mathematical derivations. I would recommend the authors to carefully revise the notation and make it consistent throughout the manuscript.
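For concreteness, the quantity presumably intended by the paper — the sum of the singular values of $\boldsymbol{F}$ after dropping the $T$ largest — can be sketched as follows (a hypothetical helper for illustration, not the authors' code):

```python
import numpy as np

def truncated_nuclear_norm(F, T):
    """Sum of the singular values of F with the T largest dropped.

    Here sigma_i are singular values of F (not the sigmoid of Sec. 3.2);
    the sum over i = T+1, ..., d is an empty sum (zero) when T = d.
    """
    d = min(F.shape)
    assert 0 <= T <= d, "T must lie in [0, d]"
    sigma = np.linalg.svd(F, compute_uv=False)  # sorted in descending order
    return sigma[T:].sum()

# A rank-1 matrix has a single nonzero singular value, so truncating
# the largest one leaves a (numerically) zero remainder.
F = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
full = truncated_nuclear_norm(F, 0)   # the ordinary nuclear norm
trunc = truncated_nuclear_norm(F, 1)  # drops the dominant direction
```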
**Typos and other**
- L. 2, 29, 105, 245, ... "Visual Transformer (ViT)" -> "Vision Transformer (ViT)"
- L. 25-26: "[...] abnormalities detection in anatomy in chest X-rays" -> "[...] abnormalities detection in chest X-rays"
- L. 28: "Early works adopt convolutional neural networks (CNNs) such as U-Net [3] for representation learning on radiography images." - I am not sure how one would use a U-net for representation learning. U-nets are typically used for segmentation tasks or image-to-image mappings. Please clarify.
- L. 157: "denotes the denotes the weights" -> "denotes the weights"
Technical Quality: 2
Clarity: 3
Questions for Authors: - Table 12: Do the authors have an explanation why the optimal $\alpha$ is 1.0 for COVIDx but 0.2/0.5 for CheXpert? Also, did the authors conduct experiments with $\alpha > 1.0$?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors do not discuss the limitations of their method in the paper and answer question 2 in the NeurIPS paper checklist with "[NA]", indicating that "the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper." I do not agree with this assessment and think that the authors should discuss the limitations of their method openly. Some limitations are, e.g.:
- The method is evaluated on two datasets only, both of which are concerned with thorax disease classification. Whether the method generalizes to other datasets remains unclear.
- The authors do not evaluate their method on any non-pretrained network and on few architectures only.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the review and its suggestions. The raised issues are addressed below.
**1. ...the authors do not provide any evidence that ...more robust to noise or background than the baselines...**
Please refer to the robust GradCAM results in our rebuttal PDF file using the suggested architecture (CNNs). Furthermore, we performed an additional ablation study showing that the proposed method is more robust to background than the baselines. In this study, we created a mask for the disease area of each original image, then decomposed each original image (with a bounding box for the disease) into a disease image and a background image. Both are of the same size as the original image: the background image has greyscale 0 in the masked disease area, and the disease image has greyscale 0 in the non-disease area. We fed the three images (the original image, the disease image, and the background image) to an LRFL model, obtaining the original features, disease features, and background features for the LRFL model, and did the same for a baseline model. For each original image, we measured the distance between the disease features and the original features using the KL-divergence of the softmaxed features, for both the LRFL model and the baseline model. We then computed the average feature distance for each model, that is, the average distance between the disease features and the original features over the images with a ground-truth bounding box for the disease in the NIH dataset. **The average feature distance for the LRFL model is 0.5642, which is smaller than the average feature distance for the baseline model, 0.6628**. These results indicate that the original features are closer to the disease features under the LRFL model than under the baseline model, evidencing the effectiveness of the LRFL model in reducing the adverse effect of the background area.
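The feature-distance computation described above can be sketched as follows; the feature vectors here are illustrative placeholders, not actual model outputs:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def feature_distance(feat_a, feat_b, eps=1e-12):
    """KL divergence between softmax-normalized feature vectors,
    as (we assume) in the rebuttal's feature-distance experiment."""
    p, q = softmax(feat_a), softmax(feat_b)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical feature vectors: for a model robust to background, the
# original-image features should lie close to the disease features.
original_feat = np.array([2.0, 0.5, -1.0, 0.3])
disease_feat = np.array([1.8, 0.6, -0.9, 0.2])      # similar -> small KL
background_feat = np.array([-1.0, 2.0, 1.5, -0.5])  # dissimilar -> larger KL

d_disease = feature_distance(original_feat, disease_feat)
d_background = feature_distance(original_feat, background_feat)
```

Averaging `d_disease` over all images with ground-truth bounding boxes would then give the per-model feature distance reported above.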
We also remark that since only the low-rank part of the original features participates in the classification process, the noise and non-disease areas in the high-rank part of the features are mostly not learned by LRFL, and in this manner, LRFL is robust to both noise and background.
**2. I don't see any experiments that specifically evaluate the generalization ability of the model.**
The improved generalization ability of a model, in this work and in the machine learning literature, refers to its better prediction accuracy on unseen test data, which is shown in Tables 1-3 of this paper.
**3. ...Whether the method universally improves the training of all neural networks...**
To show that LRFL universally improves the performance of different neural networks, we apply LRFL to four other networks for disease classification on NIH ChestX-ray14, namely Dira [5], Acpl [6], XProtoNet [7], and Swinchex [8]. We trained the LRFL models using the settings described in Section 4.1 of our paper. Results in the table below show that LRFL universally improves the performance of all the baseline models.
| Methods | mAUC |
| :--------------: | :------: |
| Dira [5] | 81.7 |
| **LR-Dira** | **82.5** |
| Acpl [6] | 81.8 |
| **LR-Acpl** | **82.3** |
| XProtoNet [7] | 82.2 |
| **LR-XProtoNet** | **82.7** |
| Swinchex [8] | 81.0 |
| **LR-Swinchex** | **81.8** |
**4. ...Can the authors give examples where noise affects...?**
Studies in the literature [9, 10] show that inevitable noise exists in radiographic images and can affect disease detection on them. We will add the references to our paper to support the claim.
**5. ...Can the authors provide a reference for this claim? ... LFP...can the authors provide a reference for this claim?**
LFP is commonly observed in various classification scenarios utilizing deep neural networks; please refer to [1-4] for support of the claim about LFP.
**6. ...The authors do not perform any experiments...effect of noise or background...**
Please refer to point 1 of this rebuttal for the experiment showing that LRFL models reduce the adverse effect of background on the radiographic images for disease classification.
**Confusing notations/typos**
We will fix the confusing notations and typos following your suggestions. Moreover, U-Net can be used for representation learning [11, 12, 13]. In fact, segmentation tasks or image-to-image mappings are achieved by using the features/representations learned by U-Nets [11, 12, 13].
**Table 12...with $\alpha > 1$?**
Because the synthetic data contain noise, as they are generated from a diffusion model with random noise as the input, adding more synthetic data does not always improve the prediction accuracy of our models or of deep neural networks in general [14, 15]. For example, the experiments in [14] show that adding excessive synthetic images to the training set hurts image classification accuracy.
In our work, we use cross-validation to select the amount of synthetic data for training the model and find the corresponding $\alpha$ for each dataset. We performed an additional experiment extending the candidate values of $\alpha$ to include $(1.5, 2, \ldots, 5)$ with an increment of 0.5, and obtained the same $\alpha$ values as those in Sec. C.3.
---
Rebuttal 2:
Title: More information about the rebuttal
Comment: (Cont'd)
**Limitation: ...The authors do not evaluate their method on any non-pretrained network and on few architectures only.**
We will discuss the suggested limitations of this paper. In addition, we conducted experiments comparing the performance of base models and low-rank models trained from scratch on NIH ChestX-ray14, CheXpert, and COVIDx without any pre-training. The results in the table below show that LRFL models still significantly improve the performance of the baseline models on all the datasets when trained from scratch.
| Methods | NIH ChestX-ray14 (mAUC) | CheXpert (mAUC) | COVIDx (Accuracy) |
| :--------------: | :------: | :------: | :------: |
| ViT-S | 66.55 | 81.97 | 78.00 |
| **ViT-S-LR** | **67.77** | **82.82** | **81.25** |
| ViT-B | 67.70 | 82.91 | 79.75 |
| **ViT-B-LR** | **68.63** | **83.76** | **82.55** |
**References**
[1] Rahaman et al. On the spectral bias of neural networks. ICML 2019.
[2] Arora et al. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. ICML 2019.
[3] Cao et al. Towards understanding the spectral bias of deep learning. IJCAI 2021.
[4] Choraria et al. The spectral bias of polynomial neural networks. ICLR 2022.
[5] Haghighi, Fatemeh, et al. "Dira: Discriminative, restorative, and adversarial learning for self-supervised medical image analysis." CVPR 2022.
[6] Liu, Fengbei, et al. "Acpl: Anti-curriculum pseudo-labelling for semi-supervised medical image classification." CVPR 2022.
[7] Kim, Eunji, et al. "XProtoNet: diagnosis in chest radiography with global and local explanations." CVPR 2021.
[8] Taslimi, Sina, et al. "Swinchex: Multi-label classification on chest x-ray images with transformers." arXiv preprint 2022.
[9] Goyal, Bhawna, et al. "Noise issues prevailing in various types of medical images." Biomedical & Pharmacology Journal 2018.
[10] Hussain, Dildar, et al. "Exploring the Impact of Noise and Image Quality on Deep Learning Performance in DXA Images." Diagnostics 2024.
[11] Ronneberger, Olaf, et al. "U-net: Convolutional networks for biomedical image segmentation." MICCAI 2015.
[12] Wu, Kai, et al. "Weakly supervised brain lesion segmentation via attentional representation learning." MICCAI 2019.
[13] Weng, Yu, et al. "Nas-unet: Neural architecture search for medical image segmentation." IEEE access 2019.
[14] Azizi, Shekoofeh, et al. "Synthetic data from diffusion models improves imagenet classification." TMLR 2023.
[15] He, Ruifei, et al. "Is synthetic data from generative models ready for image recognition?." CVPR 2023.
---
Rebuttal Comment 2.1:
Comment: I thank the authors for providing a rebuttal and addressing my concerns and questions. In particular, the newly conducted experiment on robustness to background is very much appreciated, and I recommend including it in the main paper. I am still sceptical about the synthetic training data part of the paper. Can the authors better motivate this and how it relates to the rest of the paper?
---
Rebuttal 3:
Title: Thank you for your feedback, and our response to the motivation of synthetic training data
Comment: Thank you for your feedback. The motivation for synthetic images and how the usage of synthetic images relates to this paper are explained below.
The computer vision literature [1,2,3] has extensively studied the use of generated synthetic images to augment training data and improve the prediction accuracy of image classification. Inspired by this observation, we propose to generate synthetic images and use them to form augmented training data that improve the performance of thorax disease classification. The augmented training data comprise the original training images and the synthetic images. However, too many synthetic images tend to introduce more noise into the augmented training data, so excessive synthetic images can hurt the prediction accuracy of DNNs trained on them [1]. Our proposed low-rank feature learning (LRFL) method, coupled with the selection of the amount of synthetic images, effectively mitigates this issue. The proposed low-rank learning method only learns the low-rank part of the features learned by a deep learning model, so that noise in the high-rank part does not affect the learned model. Also, cross-validation (described in lines 248-261) is used to select a proper number of synthetic images that will boost the prediction accuracy while not introducing too much noise into the augmented training data.
**References**
[1] Azizi et al. "Synthetic data from diffusion models improves imagenet classification." TMLR 2023.
[2] He et al. "Is synthetic data from generative models ready for image recognition?." CVPR 2023.
[3] Trabucco, Brandon, et al. "Effective data augmentation with diffusion models." ICLR 2024.
---
Rebuttal Comment 3.1:
Title: Thank you for the additional information
Comment: Thanks for your prompt response, this is very much appreciated!
> However, too many synthetic images tend to introduce more noise to the augmented training data so excessive synthetic images can hurt the prediction accuracy of DNNs trained on the augmented training data [1]. Our proposed low-rank feature learning (LRFL) method coupled with the selection of the amount of the synthetic images effectively mitigate this issue. The proposed low-rank learning method only learns the low-rank part of the features learned by a deep learning model so that noise in the high-rank part would not affect the learned model.
But from Table 3, I cannot see any indication that the low-rank model can better leverage the synthetic data compared to the baseline method, as you claimed. There we can see that ViT-S improves by 1.8%, whereas ViT-S-LR only improved by 0.5%. Similarly, ViT-B improved by 1.7%, whereas ViT-B-LR improved only by 0.5%.
Therefore, I don't think that the inclusion of the synthetic data part in the paper is well motivated, and I find neither theoretical nor empirical support for the claim that the proposed method can better leverage this synthetic data for thorax disease classification.
---
Reply to Comment 3.1.1:
Title: Thank you for your feedback, and our further response
Comment: We herein provide a detailed justification of our claim that "Our proposed low-rank feature learning (LRFL) method coupled with the selection of the amount of the synthetic images effectively mitigates this issue of noise in the synthetic training images".
We respectfully remind this reviewer that the improvement of our LRFL models (ViT-S-LR or ViT-B-LR) over the base models (ViT-S or ViT-B) varies with the size of the synthetic data (# Synthetic Images in Table 3). A proper number of synthetic training images often improves the prediction accuracy of general DNNs [1], including both an LRFL model and the corresponding base model. As a result, the effectiveness of our low-rank feature learning (LRFL) method in reducing the adverse effect of noise in the synthetic data should not be judged only by the final synthetic data size selected by cross-validation, as shown in Table 3. To this end, we show in the table below the performance of our LRFL models and the base models for different choices of the size of the synthetic data.
| Synthetic Data Size | 0 | 0.1 $n$ | 0.15 $n$ | 0.2 $n$ | 0.25 $n$ | 0.3 $n$ | 0.35 $n$ | 0.4 $n$ | 0.45 $n$ | 0.5 $n$ | 0.6 $n$ | 0.7 $n$ | 0.8 $n$ | 0.9 $n$ | 1 $n$ |
| :----------: | :--: | :-----: | :------: | :-----: | :------: | :-----: | :------: | :-----: | :------: | :-----: | :-----: | :-----: | :-----: | :-----: | :---: |
| ViT-S | 89.2 | 89.2 | 89.3 | 89.2 | 89.3 | 89.1 | 89.0 | 88.8 | 88.6 | 88.2 | 88.0 | 87.8 | 87.2 | 87.0 | 87.1 |
| ViT-S-LR | 89.6 | 89.6 | 89.6 | 89.7 | 89.7 | 89.7 | 89.7 | 89.6 | 89.7 | 89.6 | 89.5 | 89.4 | 89.3 | 89.2 | 89.3 |
| ViT-B | 89.3 | 89.5 | 89.7 | 89.8 | 89.9 | 89.6 | 89.1 | 88.9 | 88.7 | 88.5 | 88.4 | 88.2 | 87.8 | 87.6 | 87.6 |
| ViT-B-LR | 89.8 | 90.0 | 90.2 | 90.3 | 90.4 | 90.4 | 90.4 | 90.3 | 90.4 | 90.2 | 90.4 | 90.4 | 90.2 | 90.0 | 90.1 |
It can be observed from this table that the performance of both the LRFL models and the base models initially improves with more synthetic images. After a certain point, however, additional synthetic images start to hurt performance due to the noise in the synthetic images; the literature on using synthetic data for training classifiers, such as [1], reports a similar observation. This is why we perform cross-validation on the size of the synthetic data for the best performance. **Importantly, it can be observed that our LRFL models (ViT-S-LR or ViT-B-LR) usually improve the performance of the corresponding base models (ViT-S or ViT-B) across different choices of the size of the synthetic data. The improvement of our LRFL models over the corresponding base models tends to be more significant as the size of the synthetic data increases. This observation justifies the effectiveness of LRFL in reducing the adverse effect of noise in the synthetic images**. For example, ViT-B-LR outperforms ViT-B by $0.5$% in mAUC when $0.1n$ synthetic images are added to the training set, and the improvement escalates to $2.5$% with $n$ synthetic images added, where $n$ is the size of the original training data. We used cross-validation to find the best size of synthetic data to achieve the best performance (mAUC or Accuracy) for our LRFL models in Table 3.
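The cross-validation step described above amounts to a simple sweep over candidate synthetic-data sizes; a minimal sketch, using a subset of the ViT-B row of the table above as an illustrative score curve (`evaluate` stands in for training a model on the augmented set and measuring validation mAUC):

```python
def select_synthetic_size(candidates, evaluate):
    """Pick the synthetic-data fraction (as a multiple of n) with the
    highest validation score; evaluate(alpha) stands in for training on
    the augmented data and measuring validation mAUC."""
    scores = {alpha: evaluate(alpha) for alpha in candidates}
    best = max(scores, key=scores.get)
    return best, scores

# Illustrative mAUC curve (subset of the ViT-B row above): accuracy first
# rises with more synthetic images, then degrades as their noise dominates.
vit_b_mauc = {
    0.0: 89.3, 0.1: 89.5, 0.15: 89.7, 0.2: 89.8, 0.25: 89.9,
    0.3: 89.6, 0.35: 89.1, 0.4: 88.9, 0.5: 88.5, 1.0: 87.6,
}
best_alpha, scores = select_synthetic_size(vit_b_mauc, vit_b_mauc.get)
# best_alpha == 0.25: the largest size before performance starts to drop
```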
**We also emphasize that our LRFL method is theoretically motivated by Theorem 3.1, which also applies to the augmented training data including the synthetic images**. In Theorem 3.1, the upper bound for the generalization error of the linear neural network in our framework involves the truncated nuclear norm (TNNR), and a smaller TNNR leads to a smaller generalization bound and thus improves the generalization capability of the network.
**References**
[1] Azizi et al. "Synthetic data from diffusion models improves imagenet classification." TMLR 2023.
---
Rebuttal 4:
Title: Thank you again for your feedback, and we look forward to the adjusted rating of this paper
Comment: Dear Reviewer td2a,
We really appreciate your time giving feedback to our rebuttal. Since we have addressed all your concerns and we will add your suggested changes/discussions to the final version of this paper, could you update the rating of this paper based on all of our responses? Please kindly let us know if you have further comments/concerns/suggestions and we will respond to them immediately. Thank you again for your time!
Best Regards,
The Authors
---
Rebuttal Comment 4.1:
Title: Thank you!
Comment: Thank you for your response and the additional details. I will increase my score.
---
Reply to Comment 4.1.1:
Title: Thank you, and a gentle reminder for updating the rating of this paper
Comment: Dear Reviewer td2a,
Thank you for your response and being willing to increase the rating of this paper. It is about half an hour before the end of the discussion period (August 13 AoE), and this is a gentle reminder that we are still waiting for your updated rating. Thank you!
Best Regards,
The Authors | Summary: The paper introduces LRFL, a method for reducing the effect of noise and background or non-disease areas in radiograph images for Thorax Disease Classification. LRFL utilizes low-rank regularization to leverage low-rank features during network training.
Strengths: 1- The motivation of reducing the adverse effect of the noise and background to learn better features is interesting and reasonable.
2- Extensive experiments have shown that the proposed method improves performance over prior SOTA approaches on different thorax disease datasets.
3- The content flow of the paper makes it easy for readers to grasp the presented information.
Weaknesses: 1- The authors claim that their approach could be applied to classify diseases beyond thorax diseases, or even to general classification problems on radiographic images, without conducting any experiments on other datasets.
2- The approach's evaluation is only done using mAUC, while the baseline uses IoU with average precision.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1- Why did the authors not use another radiograph dataset to support the claim that their approach can be applied more broadly to any classification problem on radiographic images?
2- Why did the authors not add another evaluation method for their approach, like the baseline ([2], Table 7)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1- We recommend the authors test their approach on another radiograph dataset, such as one for knee disease, to support their claim.
2- Table 4 on p. 17 shows a close performance gap (0.4%) between the baseline [2] and the proposed method. Therefore, we encourage the authors to include another evaluation metric (i.e., disease localization) and compute the average precision (i.e., AP25 and AP50) between the ground-truth and predicted bounding boxes to improve their work. Although the authors compare their approach with the baseline in Figure 5 using Grad-CAM visualization, it would be better to include a table comparing the approaches rather than picking random images from the Grad-CAM visualization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the review and its suggestions. The raised issues are addressed below.
**1. Why did the author not use another radiograph to prove his/her claim that their approach can be used broader with any classification problems in radiographic images?**
Thank you for your suggestion! Because this paper focuses on the application of the proposed low-rank feature learning framework to thorax disease classification, we did not include results for classification problems on other diseases, such as knee disease, due to the page limit. However, we will present such results for more types of diseases in the final version of this paper.
**2. Why did the author not add another evaluation method for their approach, likewise the baseline [2] Table 7?**
We show the improved Average Precision (AP) results for disease localization in the table below, where the AP for disease localization is computed following the same settings as [2] (reference in the paper). The experiments are done on a subset of ChestX-ray14 which offers 787 cases with bounding boxes for a total of eight thorax diseases. It is observed from the results below that our LRFL model improves the $AP_{25}$ and $AP_{50}$ for disease localization by 1.1 and 1.2, respectively.
| Disease | Size (# of px) | ViT-S AP$_{25}$ | ViT-S AP$_{50}$ |ViT-S-LR AP$_{25}$ | ViT-S-LR AP$_{50}$ |
| :----------: | :------------: | :-----------: | :-----------: |:-----------: | :-----------: |
| Nodule | 224 | 9.2 | 3.9 | 11.7 | 5.1 |
| Mass | 756 | 27.0 | 11.1 | 29.3 | 12.2 |
| Atelectasis | 924 | 31.5 | 8.1 | 34.2 | 9.6 |
| Pneumothorax | 1899 | 4.7 | 0.0 | 6.2 | 1.7 |
| Infiltrate | 2754 | 11.4 | 1.3 | 12.9 | 1.9 |
| Effusion | 2925 | 8.8 | 1.0 | 10.2 | 2.0 |
| Pneumonia | 2944 | 27.8 | 9.3 | 29.6 | 10.2 |
| Cardiomegaly | 8670 | 16.3 | 3.0 | 18.8 | 4.2 |
| All | 2300 | 18.0 | 4.7 | 19.1 | 5.9 |
---
Rebuttal Comment 1.1:
Comment: Thank you for providing a response. The newly added discussions and experiment results have addressed my concerns. | Summary: This paper introduces a novel Low-Rank Feature Learning (LRFL) method to effectively reduce noise and non-disease areas in radiographic images, enhancing disease classification. The LRFL method, which is theoretically and empirically motivated, demonstrates superior classification performance compared to state-of-the-art methods when applied to pre-trained neural networks, improving both multi-class AUC and classification accuracy.
Strengths: - thorax disease classification is not an easy problem, motivation is high, and significance is solid.
- low-rank feature learning is proposed, which may be applicable to all kinds of neural networks for disease classification (thorax).
- three large-scale X-ray datasets are used, and good results were obtained.
- sharp generalization bound analysis is solid
Weaknesses: - LRFL is based on LFP, and truncated nuclear norm is added as a regularization term. Nothing more. In this sense, there are so many similar methods with different regularizations.
- training diffusion models to generate synthetic images (X-ray) has already been done by many; why do the authors propose this as a novelty?
- introduction about radiographic images is odd...too simple and already known
- section 2.2 stands out of nowhere...very broad without specific information related to work.
- section 2.3 can be longer, that is the main part and motivation but kept short and simple. Put a picture to highlight.
- figure 2 is useless.
- Grad-CAM is noisy; better to use gification and other methods to show the localizations, or a robust version of Grad-CAM. There are many works showing Grad-CAM is not a suitable method.
- comparisons are weak; there are many advanced versions of algorithms, even with eye-tracking-supported classification results available for the same data. SOTA is not updated.
- discussion is missing
Technical Quality: 3
Clarity: 2
Questions for Authors: weakness above are self-descriptive and including questions.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - lack of novelty
- lack of enough and valid comparisons
- experimental results are not convincing
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. LRFL is based on LFP, and truncated nuclear norm is added as a regularization term. Nothing more. In this sense, there are so many similar methods with different regularizations.**
We respectfully but strongly disagree with this claim since the significant contributions of this paper are missed in this claim.
While this paper uses the truncated nuclear norm (TNNR) for low-rank learning and TNNR has also been used for low-rank learning in the existing machine learning literature, the significance and novelty of the proposed LRFL method lies in the following two aspects, with significant advantages over the existing works.
**First, we propose a novel separable approximation to the TNNR, so that standard SGD can be used to efficiently optimize the training loss of LRFL with the TNNR**. The formulation of the separable approximation to the TNNR is described in Section 3.4 of our paper. The training algorithm with such novel separable approximation to the TNNR by SGD is detailed in Algorithm 1 of our paper. Results in Table 1-3 show that minimizing the training loss with the separable approximation to the TNNR significantly improves the performance of baseline models for disease classification.
To further verify the efficiency of our training algorithm compared to the existing optimization method for the TNNR, we compare the training time of our LRFL models with an existing method for optimizing the TNNR, TNNM-ALM [1], on NIH ChestX-ray14, CheXpert, and COVIDx. The results in the table below show that our LRFL method achieves a 7$\times$-10$\times$ acceleration in the training process on the three datasets, demonstrating the effectiveness and efficiency of the separable approximation to the TNNR proposed in our paper.
| Methods | NIH ChestX-ray14 (minutes) | CheXpert (minutes) | COVIDx (minutes) |
| :--------------: | :------: | :------: | :------: |
| ViT-S | 54 | 90 | 23 |
| ViT-S (TNNM-ALM) | 804 | 854 | 342 |
| ViT-S-LR | 98 | 117 | 38 |
| ViT-B | 72 | 162 | 32 |
| ViT-B (TNNM-ALM) | 915 | 1461 | 418 |
| ViT-B-LR | 113 | 185 | 45 |
**Second, we provide rigorous theoretical result justifying the proposed low-rank feature learning**. In particular, it is shown in Theorem 3.1 that the upper bound for the generalization error of the linear neural network in our framework involves the TNNR, and a smaller TNNR leads to a smaller generalization bound thus improves the generalization capability of the network.
**2. training diffusion algorithms to generate synthetic images (xray) is already done...**
To the best of our knowledge, this work is among the first to use synthetic images to boost the performance of DNNs on thorax disease classification tasks coupled with the proposed efficient low-rank feature learning method.
**3. introduction about radiographic images is odd...too simple and already known**
We will simplify such an introduction to radiographic imaging in the final version of this paper.
**4. section 2.2 stands out of nowhere...very broad without specific information related to work.**
We respectfully disagree with this claim, which is factually wrong. Section 2.2 covers important works using DNNs for medical imaging tasks, including thorax disease classification. Importantly, the MAE method [1] introduced in Section 2.2 is an important pre-training method in medical imaging for thorax disease classification, and it is also adopted as the pre-training method in this work.
**5. section 2.3 can be longer, that is the main part and motivation but kept short and simple. Put a picture to highlight.**
We will elaborate existing low-rank learning methods with more details, and add a figure similar to Figure 1 to Section 2.3.
**6. figure 2 is useless.**
We will change Figure 2 to a text description in Sec. 3.1.
**7. Gradcam is noisy, better to use gification and other methods to show the localizations, or robust version of grad cam. there are many works showing grad cam is not a suitable method.**
Please refer to the robust GradCAM results in our rebuttal PDF file in our global response.
**8. comparisons are weak, there are many advanced versions of algorithms there, even with eye tracking supported classification results for the same data available. SOTA is not updated. discussion is missing**
We have already incorporated the most recent SOTA results in [1], published in 2023, for thorax disease classification on the same datasets as in this paper, and our LRFL models render significantly better results than the current SOTA [1] and other competing baselines. We also provide detailed discussions of our results in Section 4, where the experimental results for each dataset have a discussion paragraph titled "Results and Analysis" or "Results". Eye-tracking-supported classification methods are not in the scope of this work or the relevant literature, because this work and the related works, such as [1] and those reviewed in Section 2.2, use DNNs for automatic thorax disease classification or general medical imaging tasks without eye-tracking information.
**References**
[1] Xiao et al. Delving into masked autoencoders for multi-label thorax disease classification. WACV 2023.
---
Rebuttal Comment 1.1:
Title: improved manuscript
Comment: Thank you for the rebuttal; some of the questions were handled well.
The paper is improved, but overall I do not see an innovation at the high level, really, respectfully. Regularization (different kind) are highly visited topic at low rank representation, and this very particular situation can be exception but the scope is narrow then.
When you say that this is the first time in the literature, and then include this only for thorax cases, or some other medical imaging application based, the only thing is happening is to narrowing down the innovation into a certain application level. This makes it new application perhaps but not an entirely innovative method to be considered at certain venues such as NeurIPS or ICLR or less competitive places like IEEE ISBI etc.
Nevertheless, I tend to increase my scores according to the new experiments, some clarifications, and the promises the authors are making to remove some parts, add others, etc.
---
Rebuttal 2:
Title: We respectfully and strongly disagree with the concern about the regularization and the novelty of this paper
Comment: 1. We respectfully and strongly disagree that "...not see an innovation at the high level, really, respectfully. Regularization (different kind) are highly visited topic at low rank representation, and this very particular situation can be exception but the scope is narrow then." This argument is weak and based on problematic logic: the fact that regularization is a widely studied topic does not justify the claim that there is no novelty in this paper. As emphasized in our rebuttal, we propose **a novel and separable approximation to the TNNR, so that standard SGD can be used to efficiently optimize the training loss of LRFL with the TNNR; the proposed LRFL method also enjoys a rigorous and sharp theoretical guarantee, as shown in Theorem 3.1.**
2. We respectfully and strongly disagree that "...this very particular situation can be exception but the scope is narrow then." As discussed in the introduction section of this paper and acknowledged by all the other reviewers, **the proposed LRFL is applicable to general DNNs, so its application scope is rather broad.** In this paper, we demonstrate the application of LRFL to thorax disease classification, an important medical imaging and healthcare area where deep learning methods are used for disease classification.
3. We respectfully and strongly disagree that "When you say that this is the first time in the literature, and then include this only for thorax cases, or some other medical imaging application based, the only thing is happening is to narrowing down the innovation into a certain application level. " **The innovation of the novel and separable approximation to the TNNR and the theoretical guarantee of LRFL shown in Theorem 3.1 is never narrowed to medical imaging; as mentioned in point 2 above, this innovation is applicable to general DNNs for image classification tasks**.
**Overall, we hope the reviewer evaluates the novelty of this paper, which we emphasized in the rebuttal and the above explanations.** Again, **claiming that the proposed LRFL is not novel based on the fact that LRFL is formulated as a regularization method and that regularization is a widely visited area is indeed problematic and questionable. Moreover, the scope of the innovation of LRFL is never limited to thorax disease classification or even medical imaging, because LRFL is generally applicable to all DNNs**. Our regularization in the proposed LRFL is novel and significantly different from the existing literature on low-rank learning with regularization, as described in the rebuttal and our explanation above.
We look forward to a justified and reasonable evaluation of this paper. Thank you!
---
Rebuttal Comment 2.1:
Title: updated score
Comment: thank you for the further clarifications, scores were updated/to be updated.
---
Rebuttal 3:
Title: Thank you for your prompt response; we look forward to your updated rating
Comment: Thank you for your prompt response confirming our further clarifications and mentioning that "...scores were updated/**to be updated**". We look forward to your further update of the rating for this paper based on our clarifications. Please kindly let us know if you have more comments/suggestions, and we will respond to them immediately. Thank you for your time!
Best Regards,
The Authors | Rebuttal 1:
Rebuttal: We appreciate the reviews and the suggestions they contain. We have posted responses to the individual reviews addressing all the raised concerns. Here we provide global responses, itemized below.
**1. Significance and novelty of this paper**
While this paper uses the truncated nuclear norm (TNNR) for low-rank learning and TNNR has also been used for low-rank learning in the existing machine learning literature, the significance and novelty of the proposed LRFL method lies in the following two aspects, with significant advantages over the existing works.
**First, we propose a novel separable approximation to the TNNR, so that standard SGD can be used to efficiently optimize the training loss of LRFL with the TNNR**. The formulation of the separable approximation to the TNNR is described in Section 3.4 of our paper. The training algorithm with such novel separable approximation to the TNNR by SGD is detailed in Algorithm 1 of our paper. Results in Table 1-3 show that minimizing the training loss with the separable approximation to the TNNR significantly improves the performance of baseline models for disease classification.
To further verify the efficiency of our training algorithm compared to the existing optimization method for the TNNR, we compare the training time of our LRFL models with an existing method for optimizing the TNNR, TNNM-ALM [1], on NIH ChestX-ray14, CheXpert, and COVIDx. The results in the table below show that our LRFL method achieves 7$\times$-10$\times$ acceleration in the training process on the three datasets, demonstrating the effectiveness and efficiency of the separable approximation to the TNNR proposed in our paper.
| Methods | NIH ChestX-ray14 (minutes) | CheXpert (minutes) | COVIDx (minutes) |
| :--------------: | :------: | :------: | :------: |
| ViT-S | 54 | 90 | 23 |
| ViT-S (TNNM-ALM) | 804 | 854 | 342 |
| ViT-S-LR | 98 | 117 | 38 |
| ViT-B | 72 | 162 | 32 |
| ViT-B (TNNM-ALM) | 915 | 1461 | 418 |
| ViT-B-LR | 113 | 185 | 45 |
**Second, we provide a rigorous theoretical result justifying the proposed low-rank feature learning**. In particular, Theorem 3.1 shows that the upper bound on the generalization error of the linear neural network in our framework involves the TNNR, and a smaller TNNR leads to a smaller generalization bound and thus improves the generalization capability of the network.
We would also like to remind the reviewers that this work is among the first to effectively use synthetic data generated by a diffusion model and a low-rank feature learning model to achieve the state-of-the-art accuracy for thorax disease classification, which is an important research problem in the medical imaging domain.
**2. Robust Grad-CAM visualization results**
Following the suggestions in the reviews, we illustrate the robust Grad-CAM [1] visualization results in the attached rebuttal PDF file.
**3. LRFL for disease localization**
We show the improved Average Precision (AP) results for disease localization in the table below, where the AP for disease localization is computed following the same settings as [2]. The experiments are done on a subset of ChestX-ray14, which offers 787 cases with bounding-box annotations for a total of eight thorax diseases. The results below show that our LRFL model improves the $AP_{25}$ and $AP_{50}$ for disease localization by 1.1 and 1.2, respectively.
| Disease | Size (# of px) | ViT-S AP$_{25}$ | ViT-S AP$_{50}$ |ViT-S-LR AP$_{25}$ | ViT-S-LR AP$_{50}$ |
| :----------: | :------------: | :-----------: | :-----------: |:-----------: | :-----------: |
| Nodule | 224 | 9.2 | 3.9 | 11.7 | 5.1 |
| Mass | 756 | 27.0 | 11.1 | 29.3 | 12.2 |
| Atelectasis | 924 | 31.5 | 8.1 | 34.2 | 9.6 |
| Pneumothorax | 1899 | 4.7 | 0.0 | 6.2 | 1.7 |
| Infiltrate | 2754 | 11.4 | 1.3 | 12.9 | 1.9 |
| Effusion | 2925 | 8.8 | 1.0 | 10.2 | 2.0 |
| Pneumonia | 2944 | 27.8 | 9.3 | 29.6 | 10.2 |
| Cardiomegaly | 8670 | 16.3 | 3.0 | 18.8 | 4.2 |
| All | 2300 | 18.0 | 4.7 | 19.1 | 5.9 |
**References**
[1] Selvaraju, Ramprasaath R., et al. "Grad-cam: Visual explanations from deep networks via gradient-based localization." ICCV 2017.
[2] Xiao et al. Delving into masked autoencoders for multi-label thorax disease classification. WACV 2023.
Pdf: /pdf/7638fe42ec80199f8153ccbbd56ad22e9db88e80.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Active Set Ordering | Accept (poster) | Summary: This submission proposes a novel Mean Prediction (MP) method for the active set ordering problem. MP selects a pre-defined number $k$ of inputs with the highest Gaussian process posterior mean values through a novel sampling strategy. Theoretical analysis of the regret, the prediction, and the sampling strategy of the proposed method is discussed in detail, and experiments on synthetic functions and a real-life application demonstrate the effectiveness of MP for ordering sets of inputs based on expensive evaluations of a black-box function. The submission includes GP-UCB Bayesian optimization as a special case of the active set ordering problem.
Strengths: 1. The paper presents adequate theoretical justification for their proposed method. The motivation for studying the active set ordering problem is clearly stated and the proofs on regret bound, prediction and sampling strategy seem valid to me.
2. The idea of formulating black-box function optimization as a set ordering problem presents novelty to some extent. It provides an alternative sampling strategy by proposing the input set $Q_t$ in lines 145-146, which can be more informative than traditional metrics such as sampling based on maximizing GP variance.
Weaknesses: 1. My first concern regards the dimensionality of the problem setting. The theoretical justification of MP seems valid for any value of $d$ where $d$ is the dimension of the black-box function. However, in the Experiments section all the problems are 2-dimensional, which is insufficient to show how practical the proposed MP method could be in applications.
2. This sampling-based method only works on discrete space problems to me. The authors use classic GP model on continuous space as the surrogate model to compute posterior mean, but then sample on the discretized space during inference. This could be one of the major limitations of MP method because a lot of probabilistic information has been missed during the discretization.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I'm not very convinced by the experiment design. For example, in the experiment on the Goldstein-Price function (lines 248-251), the input domain is discretized into 100 points and a top-5 set based on posterior mean is sampled at each of the 100 iterations. Even though repeated sampling is allowed in this paper, I still feel that almost the whole space, which consists of only 100 points, has been sampled by doing so. How do the authors justify the efficiency compared to brute force sampling over the whole sample space?
2. In all experiment settings, the 100/400 points that represent the input domain $\mathbf{X}$ are randomly sampled from normalized domain $[0, 1]^2$. If I understand correctly, each $\mathbf{X}$ of the 15 repeated experiments is different, which lead to 15 different $S_{\mu_t}(5)$ at the end of iterations. How do the authors decide which one of the $S_{\mu_t}(5)$ is the true top-5 set for the objective black-box problem? (e.g. which 5 locations are the top-5 with highest $NO_3$ concentration in Lake Zurich?)
3. Could the authors explain how $S_{\mu_t}(k)$ is constructed in line 3 of Algorithm 1? Are the posterior mean of all the points in $\mathbf{X}$ evaluated? Similar for $(\mathbf{\bar{x}_t}, \mathbf{\bar{x}'_t})$ (line 4), are all possible pairs in $S_{\mu_t}(k)$ evaluated?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See above for limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to review our paper and for acknowledging the theoretical justification, motivation, and sampling strategy. We will now address the remaining concerns as follows.
> 1. My first concern regards the dimensionality of the problem setting.
We have conducted additional experiments using the Hartmann-6D function to demonstrate the empirical performance of our algorithm in higher-dimensional spaces. Specifically, the input domain consists of $1000$ points, and the input dimension is $6$.
The results are shown in the attached PDF of the global response.
They show that our methods consistently outperform other baselines in identifying the top-$50$ set (illustrated in Figure 1b) and in simultaneously finding the maximizer, the top-$100$, and the top-$200$ sets (an active multiple set ordering problem, as illustrated in Figure 1c).
Furthermore, we would like to highlight the practical applications of our problem. As discussed in the introduction, one such application is environmental monitoring, where the input domain is typically two-dimensional, like a geographical area. This focus aligns with several experiments based on real-world datasets presented in our paper.
> 2. This sampling-based method only works on discrete space problems to me... This could be one of the major limitations of MP method because a lot of probabilistic information has been missed during the discretization.
Our assumption is that the input domain is a discrete space, and we use an exact Gaussian Process (GP) model to represent the black-box function. According to the marginalization property of GPs, the function values at any finite subset of inputs (including the discrete input domain) follow a multivariate Gaussian distribution. Since we focus solely on function evaluations within this discrete domain and our observations are acquired exclusively from this discrete domain, we believe that no probabilistic information has been lost.
Therefore, we would appreciate any guidance the reviewer can provide regarding the probabilistic information that might have been overlooked.
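As a minimal sketch of this point (assuming scikit-learn; the paper's exact GP model and kernel are not specified here, so the function, kernel, and noise level below are hypothetical), the posterior over a finite discrete domain is itself an exact multivariate Gaussian:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical discrete input domain: 100 points in [0, 1]^2,
# mirroring the experimental setup described in the paper.
X_domain = rng.uniform(size=(100, 2))

# A few noisy observations, acquired exclusively from the discrete domain.
obs_idx = rng.choice(100, size=10, replace=False)
f = lambda X: np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1])
y_obs = f(X_domain[obs_idx]) + 0.1 * rng.normal(size=10)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.1**2)
gp.fit(X_domain[obs_idx], y_obs)

# By the GP marginalization property, the posterior over the whole
# discrete domain is an exact finite-dimensional multivariate Gaussian:
# no probabilistic information about these inputs is discarded.
mu, cov = gp.predict(X_domain, return_cov=True)
print(mu.shape, cov.shape)  # (100,) (100, 100)
```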
> 1. I'm not very convinced by the experiment design... How do the authors justify the efficiency compared to brute force sampling over the whole sample space?
Thank you for your careful observation regarding the Goldstein-Price experiment (lines 248-251).
We have revised the experiment by replacing the Rand and Var baselines with RandNoRepl and VarNoRepl, which do not allow repeated sampling. The results are included in the attached PDF file of the global response. This modification potentially gives RandNoRepl (random sampling without replacement across different iterations) and VarNoRepl (uncertainty sampling without replacement across different iterations) an additional advantage over our solutions, which allow repeated sampling. By avoiding repeated sampling, RandNoRepl and VarNoRepl can sample the input domain more uniformly, whereas our methods might re-sample certain input regions.
However, as shown in Figure 1a of the attached PDF, RandNoRepl and VarNoRepl still do not outperform our solutions.
The justification for the efficiency of our solutions lies in the nature of noisy observations: with a noise standard deviation of $\sigma_n = 0.1$, a single observation at each input may not suffice to accurately determine its ordering relative to neighboring inputs in terms of the function value. Hence, spreading the sampling budget across the whole input domain may not perform well. In contrast, our approach allocates more sampling inputs to the boundary of the top-$k$ set, where it is particularly challenging to determine whether inputs belong to the top-$k$ set.
This rationale is also evident in the animation video included in the supplementary materials. Although the uncertainty sampling in the video allows repeated sampling, it distributes samples across the input domain quite evenly (as its goal is to reduce uncertainty throughout the entire input domain).
> 2. How do the authors decide which one of the $S_{\mu_t}(5)$ is the true top-5 set for the objective black-box problem?
For each repeated experiment, the set $\mathcal{S}_{\mu_t}(5)$ and the true top-5 set $\mathcal{S}(5)$ are computed independently from those in other experiments. The regret for each experiment is calculated using only its own $\mathcal{S}_{\mu_t}(5)$ and $\mathcal{S}(5)$. Therefore, it is unnecessary to identify a single set $\mathcal{S}(5)$ for all repeated experiments, as the regret for each one is computed independently. The plot displays the average and standard error of the regrets across all repeated experiments.
> 3. Could the authors explain how $S_{\mu_t}(k)$ is constructed in line 3 of Algorithm 1?
To construct $\mathcal{S}_{\mu_t}(k)$, the posterior mean values of all inputs in $\mathcal{X}$ are evaluated in $\mathcal{O}(n m_t^2)$ time (time complexity for GP prediction), where $m_t$ is the number of observations at iteration $t$. Subsequently, it takes $\mathcal{O}(n \log k)$ time to identify the top $k$ inputs by using a max heap of size $k$ and scanning through the GP posterior mean of all inputs. It is worth noting that evaluating the posterior mean (and variance) of all inputs in $\mathcal{X}$ is often necessary when $\mathcal{X}$ is finite, as demonstrated in [8] and in cases of finite input domains described in [1,18].
To find $(\bar{\mathbf{x}}_t, \bar{\mathbf{x}}'_t)$ (Equation 11), we iterate through elements in $\mathcal{S}_{\mu_t}(k) \times \mathcal{S}^c_{\mu_t}(k)$, which takes $\mathcal{O}(k (n - k))$ time (linear in $n$).
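The two steps above can be sketched as follows (a hypothetical Python illustration with a random stand-in for the GP posterior mean; the exact pair criterion of Equation 11 is simplified here to the smallest posterior-mean gap across the top-$k$ boundary):

```python
import heapq

import numpy as np

rng = np.random.default_rng(0)

n, k = 100, 5
# Random stand-in for the GP posterior mean mu_t over the n candidate
# inputs (in the paper, computing it costs O(n * m_t^2)).
post_mean = rng.normal(size=n)

# Line 3: top-k indices via a size-k heap, O(n log k).
top_k = heapq.nlargest(k, range(n), key=lambda i: post_mean[i])
in_set = set(top_k)
out_set = [i for i in range(n) if i not in in_set]

# Line 4: scan S x S^c, i.e. O(k (n - k)) candidate pairs, and pick
# the boundary pair with the smallest posterior-mean gap.
pair = min(
    ((i, j) for i in top_k for j in out_set),
    key=lambda p: post_mean[p[0]] - post_mean[p[1]],
)
```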
---
Thank you for patiently reading our response. We sincerely hope that the above clarifications address your concerns regarding the algorithm and our experimental results, and hence, improving your opinion on our paper. We will thoroughly incorporate your valuable feedback, along with the additional experiments, into the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response. RandNoRepl and VarNoRepl experiments are interesting and they do perform well given 100 iterations as I expected. By performing the additional comparison to them I'm more convinced by seeing how the proposed MP method close up the regret in fewer iterations for the Goldstein-Price case.
This is an interesting idea in general. I'll raise my score to 5.
If the authors have time, I'm interested in a comparison between RandNoRepl, VarNoRepl and MP methods on case where both MP and other baselines took all iterations budget to close up the regret in original setting. (e.g. compare RandNoRepl, VarNoRepl and MP methods on Branin-Hoo S(5) with 100 iterations)
---
Reply to Comment 1.1.1:
Title: Thank You for Your Reconsideration and Updated Score
Comment: Thank you very much for reviewing our response and for appreciating our additional experimental results, as well as for improving the score. We are glad to hear that our new baseline designs align with your suggestions. If time allows, we hope to address any remaining questions. Accordingly, we have conducted experiments with RandNoRepl and VarNoRepl on Branin-Hoo $\mathcal{S}(5)$ with $100$ iterations.
The results are presented below (together with the results of Rand, Var, and MP in the plot in the paper for reference).
| Iteration | 20 | 40 | 60 | 80 | 100 |
|-------------|------------|------------|------------|------------|------------|
| Rand | 4.80±1.49 | 1.11±0.23 | 0.34±0.07 | 0.25±0.06 | 0.32±0.04 |
| RandNoRepl | 1.68±0.39 | 0.58±0.14 | 0.43±0.08 | 0.32±0.05 | 0.20±0.03 |
| Var | 0.64±0.11 | 0.34±0.07 | 0.34±0.07 | 0.23±0.05 | 0.23±0.03 |
| VarNoRepl | 0.64±0.11 | 0.37±0.07 | 0.39±0.10 | 0.29±0.08 | 0.29±0.08 |
| $\bar{\mathbf{x}}_t \wedge \bar{\mathbf{x}}_t'$ | 0.46±0.07 | 0.29±0.06 | 0.17±0.05 | **0.07±0.02** | 0.05±0.02 |
| $\bar{\mathbf{x}}_t \vee \bar{\mathbf{x}}_t'$ | 0.56±0.10 | 0.19±0.04 | 0.17±0.05 | 0.11±0.03 | 0.12±0.03 |
| $\bar{\mathbf{x}}_t\ \vartriangle\ \bar{\mathbf{x}}_t'$ | **0.42±0.07** | 0.23±0.05 | **0.14±0.04** | 0.09±0.03 | **0.04±0.01** |
| $\bar{\mathbf{x}}_t\ \triangledown\ \bar{\mathbf{x}}_t'$ | 0.44±0.09 | **0.16±0.03** | 0.18±0.05 | 0.13±0.03 | 0.08±0.02 |
We observe that RandNoRepl significantly outperforms Rand. However, VarNoRepl performs similarly to Var, as Var is designed to reduce uncertainty across the entire input domain. Due to this characteristic, Var tends not to sample the same input repeatedly when many inputs remain unsampled. Consequently, Var and VarNoRepl behave similarly in the early stages (similar regret). Only when a sufficient number of observations have been made does Var, due to correlations among evaluated inputs, begin to select repeated samples and diverge from VarNoRepl.
However, since Rand, RandNoRepl, Var, and VarNoRepl are not specifically designed to target the top-$5$ set, our solutions continue to outperform them.
We should also mention that in both the Branin and Goldstein experiments, all algorithms were initialized with a set of $3$ observations. As a result, in the final $3$ iterations, RandNoRepl and VarNoRepl were required to select repeated samples.
We will incorporate these baselines and the discussion into the revised paper, and we sincerely hope this will address any remaining questions you may have. | Summary: The paper poses a novel active learning problem formulation of active set ordering, in which we aim to identify the data points that yield the top- and bottom-$k$ values via a given objective function.
This active learning goal serves as a compromise between Bayesian experimental design which focuses solely on learning and Bayesian optimization targeting optimization, and poses as an alternative to level-set estimation, especially in scenarios where a level-set threshold is not easily determined.
The authors first define an appropriate metric of regret, propose using the mean prediction of the surrogate Gaussian process to generate an ordering of the data points to recommend to the user, and finally develop an acquisition function that aims to reduce an approximation of the regret resulting from that posterior predictive ordering.
The paper presents various theoretical results bounding the regret of the proposed algorithm, and experiments are conducted to illustrate the empirical effectiveness of the algorithm.
Strengths: I find active set ordering to be an interesting active learning problem that is related to other common active learning problems (experimental design, level-set estimation, Bayesian optimization).
The various algorithmic choices made throughout the paper are reasonable and well-motivated by theoretical insights.
The experiments include a wide range of synthetic and real-world tasks, and do a good job showing that the proposed strategy yields competitive performance against baselines.
Weaknesses: - The presentation of Sections 3 and 4 is a bit hard to follow.
I understand Section 3 serves as the base case where we develop the core tools which are then further extended in Section 4, but the narrative has a jumping-back-and-forth feel that I find somewhat confusing.
- The claims the authors make in Section 1 to motivate the problem might be too strong.
I would say something along the lines of, this work presents an alternative to LSE when setting the threshold is difficult (as opposed to that it should replace it, since sometimes a desirable threshold is known, in which case LSE should be preferred).
- I would include LSE acquisition functions (e.g., STRADDLE) among the baselines in the experiments.
Technical Quality: 3
Clarity: 2
Questions for Authors: - The search space sizes in the experiments are quite small.
Does the algorithm scale well to large search spaces?
What’s the computational complexity?
Does it take quadratic time with respect to search space size (because we need to iterate over all pairs)?
- Any insights on how the different variants behave and which should be preferred in which situations?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for dedicating their time to review our paper and for acknowledging the interesting research problem situated at the intersection of experimental design, level set estimation, and Bayesian optimization. We are delighted to learn that the reviewer finds our paper to be theoretically sound, well-motivated, and supported by a wide range of experimental results. We would like to address and clarify the remaining concerns as follows.
> I would include LSE acquisition functions (e.g., STRADDLE) among the baselines in the experiments.
Since LSE requires a known (or implicit) threshold, which is absent in our problem setting, it is not immediately obvious to us how to include LSE acquisition functions (e.g., STRADDLE) among the baselines in the experiments.
> The search space sizes in the experiments are quite small. Does the algorithm scale well to large search spaces? What’s the computational complexity? Does it take quadratic time with respect to search space size (because we need to iterate over all pairs)?
Updating the GP posterior belief incurs $\mathcal{O}(m_t^3 + n m_t^2)$ computational complexity (including $\mathcal{O}(m_t^3)$ for training and $\mathcal{O}(n m_t^2)$ for prediction, where $m_t$ is the number of observations at iteration $t$ as we use the exact GP model). This complexity can also be reduced using sparse GP approximation. Given the GP posterior belief, our algorithm involves the following major steps outlined in Algorithm 1:
+ Line 3: Constructing $\mathcal{S}_{\mu_t}(k)$ takes $\mathcal{O}(n \log k)$ to find the top-$k$ inputs by using a max heap of size $k$ and scanning through the GP posterior mean of all inputs.
+ Line 4: We need to scan through the elements in $\mathcal{S}_{\mu_t}(k) \times \mathcal{S}^c_{\mu_t}(k)$, so it takes $\mathcal{O}(k (n - k))$.
Therefore, the runtime of each iteration is $\mathcal{O}(m_t^3 + n m_t^2 + n \log k + k (n - k))$ which is not quadratic in the search space size ($n$). Specifically, it is linear in $n$.
To further strengthen our experimental results, we have included additional experiments featuring a larger search space, $|\mathcal{X}| = 1000$, and an increased input dimension, $d=6$. They can be found in the attached PDF file of the general response.
> Any insights on how the different variants behave and which should be preferred in which situations?
Based on the cumulative regret bound, we do not have a preference for any particular variant, since they all result in the same sublinear cumulative regret bound. However, when $k = 1$, we prefer $\bar{x}_t \triangledown \bar{x}'_t$ because it is equivalent to $\text{arg}\max_{x \in \mathcal{X}} u_t(x)$, which can be computed in $\mathcal{O}(n)$ time without the need to find the pair $(\bar{x}_t, \bar{x}'_t)$ (Remark 4.5).
When $k = n - 1$, we prefer $\bar{x}_t \vartriangle \bar{x}'_t$ because it is equivalent to $\text{arg}\min_{x \in \mathcal{X}} l_t(x)$, which can also be computed in $\mathcal{O}(n)$ time without the necessity of identifying the pair $(\bar{x}_t, \bar{x}'_t)$ (Remark 4.5).
---
We will revise the paper to incorporate the additional comments on the flow between Sections 3 and 4 as well as the motivation of the problem. We appreciate your patience in reading our response and sincerely hope that the above clarifications, along with the additional experimental results, address your remaining concerns and enhance your perspective on our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal.
Regarding LSE, what some do in practice is, for example, to set the threshold to be at some reasonable quantile of the observed data. My goal is to see how much improvement in terms of regret your proposed method leads to compared to naively using LSE in that way in your setting.
Overall, I will keep my score.
---
Rebuttal 2:
Title: Thank You and Further Clarification Regarding LSE
Comment: Thank you for your response and for the score in support of our submission. We are eager to address any concerns you may have about our paper, so we would like to provide more details on
1. Why comparing an LSE solution with our proposed solutions in the experiments is not straightforward.
2. While LSE and finding the top-$k$ set are not empirically comparable, our theoretical analysis suggests that our solutions are as efficient as an LSE approach, even though the top-$k$ problem presents greater computational challenges.
3. Our solutions are preferred in the scenario you mentioned, specifically, when setting the threshold as a quantile of the observed data, compared to LSE.
---
**Empirical comparison**
We agree with the reviewer that in practice, the threshold can be set as a reasonable quantile of the observed data. However, in our problem setting, we are only provided with the size of the top-$k$ set (i.e., the number $k$) without any information about the threshold. This lack of threshold knowledge is one of the key motivations behind our proposed problem, as highlighted in the introduction (lines 25-26):
> "However, without domain knowledge of the black-box function, it is easy to set a threshold that leads to undesirably large or small level sets."
Therefore, running LSE with only this information (i.e., $k$) is not feasible.
If we were to apply LSE with a threshold based on a quantile, the resulting superlevel set would differ from the top-$k$ set in our problem, making a direct comparison between the two methods invalid. This is because estimating a larger set often requires more samples than estimating a smaller set empirically. Additionally, even if we were to carefully select a threshold so that the superlevel set matches the top-$k$ set, the comparison would still be unfair, as the threshold is computed using the knowledge of the true top-$k$ set that is assumed to be unknown in our problem setting and should not be utilized by any solution.
---
**Theoretical Implication**
While LSE and finding the top-$k$ set are not empirically comparable for these reasons, our theoretical discussion on the upper bound of cumulative regret suggests that our solutions are as efficient as the LSE approach in [8], which is an improved version of STRADDLE.
This is particularly noteworthy since our problem may be inherently more challenging than LSE. For example, if the black-box function is known, identifying the level set given a threshold can be done by simply comparing all function evaluations against the threshold, which requires a time complexity of $\mathcal{O}(n)$, where $n$ is the size of the input domain. However, finding the top-$k$ set (for $k > 1$) involves using a heap data structure, resulting in a time complexity of $\mathcal{O}(n \log k)$, which is more computationally demanding.
---
**Motivation for Our Solutions over LSE**
Finally, we would like to emphasize that your observation about the practical use of LSE, where the threshold is chosen as a reasonable quantile of the observed data, precisely highlights the need for our work.
For simplicity, consider a scenario where the input domain consists of 100 points, and the desired level set corresponds to the third quartile (i.e., 25\% of the data points have function evaluations at or above this threshold). One could set $k = 100 \times 0.25 = 25$ and use our solutions to discover the top-$25$ set. The boundary of this top-$25$ set should correspond to the level set they wish to find, without the need to estimate the threshold (such as the third quartile).
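The correspondence between a quantile threshold and the set size $k$ can be sketched numerically (a hypothetical illustration with synthetic function values; ties are ignored, which holds almost surely for continuous values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
f_vals = rng.normal(size=n)  # hypothetical (unknown) function values

quantile = 0.75                # third quartile
k = round(n * (1 - quantile))  # k = 25 inputs at or above it

# Top-k set by value...
top_k = set(np.argsort(f_vals)[-k:].tolist())
# ...coincides with the superlevel set at the empirical quantile threshold,
# so targeting the top-k set sidesteps estimating the threshold itself.
threshold = np.quantile(f_vals, quantile)
superlevel = set(np.flatnonzero(f_vals >= threshold).tolist())
print(top_k == superlevel)  # True
```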
In contrast, if one were to use LSE, they would need to estimate the threshold as the third quartile from the observed data. This estimation could be inaccurate in practice due to noise in the observations and the limited number of initial observations. Therefore, even aside from sample efficiency, our proposed solutions are more desirable than LSE in such scenarios. | Summary: This paper generalizes the best $k$-arm identification problem to Gaussian processes, with the goal of estimating the set of the best $k$ function evaluations $f(x)$ on a finite domain $X$, where $k=1$ corresponds to standard Bayesian optimization. The proposed regret notion is a natural adaptation of the common regret $f(x_*)-f(x_t)$ in Gaussian process bandits.
The paper is accompanied by some first experiments on benchmark datasets.
Strengths: Interesting problem.
---- rebuttal ----
changed from 5 to 6.
Weaknesses: Finiteness assumption seems rather strong and limits this work. Also, the significance of the results (Thm 3.7 and 4.4) is not fully clear to me, as the paper seems to be mostly relying on known techniques (for kernelized bandits, and Gaussian process bandits), while the related work is not sufficiently well discussed (see questions and limitations).
Am happy to raise my score, if my concerns are addressed. Specifically about the comparison to previous and related literature.
The definition of $\pi_*(X_0,X_1)$ seems only partial, i.e., what if there exist some $x_0$ that are larger than some $x_1$, and some $x_0'$ that are smaller than some $x_1'$?
Minor comments:
* The notation is a bit convoluted. E.g., do you really need double bars and dashes for $x$? Even just tilde & dash is not great. E.g., in the beginning of section 3 you could just use $x, x'$ without the tilde (or you use $y$ etc.).
* Please do not mix definitions (in particular of important notation) and lemmas (e.g., Lemma 3.6, 3.1, ...).
Technical Quality: 3
Clarity: 2
Questions for Authors: What is the computational complexity/runtime of your approach? Is the runtime exponential in $k$?
How does your work relate to Gaussian bandit papers with correlated/dependent arms (e.g., Gupta et al. 2021, Pandrey et al 2007)
How does the results here relate to e.g., the "Interactive submodular bandit" by Chen et al. [ICML, 2017] (and related), where a submodular function is maximized in a similar fashion (in a GP-style context). Can your $k$ best selection problem be cast as a submodular (or even simply modular) set optimization problem? Are the two regret variants comparable?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The finiteness assumption on $X$ seems rather restrictive. Can $X$ be countable, or even say a compact subset / interval? Standard work on Gaussian processes with regret bounds (Srinivas et al., etc.) does not have these restrictions. For finite $X$, fewer assumptions might be possible; see e.g., Theorem 1 by Krause & Ong (NeurIPS 2011) (for example, no explicit assumptions on the RKHS norm as you make in Lemma 2.2, which they only require for arbitrary domains $X$).
Also in general the related work is not discussed sufficiently well, see questions.
Please, also see the questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to review our paper and for appreciating our interesting research problem. We will address the questions and concerns raised by the reviewer as follows.
> the significance of the results is not fully clear to me, as the paper seems to be mostly relying on known techniques
Theorems 3.7 and 4.4 show that the cumulative regret of our proposed solution is sublinear. This is a desirable property for BO solutions ([18]), ensuring that the simple regret approaches 0. While the techniques are mainly drawn from the BO literature, it is noted that
+ Many existing BO works also employ similar techniques to demonstrate sublinear cumulative regret, with their novelty often lying in new problems. Similarly, our paper makes a unique contribution by addressing the problem of active (multiple) set ordering.
+ While we build on established techniques, our paper works with a new notion of regret (pairwise and set ordering) that differs from those in the existing BO literature.
+ To the best of our knowledge, we are the first to investigate BO from this pairwise ordering perspective.
> Is the runtime exponential in $k$?
In line 3 of Algo 1, it takes $\mathcal{O}(n \log k)$ to find the top-$k$ inputs by using a heap of size $k$ and scanning through the GP posterior mean of all inputs. In line 4 of Algo 1, we need to scan through the elements in $\mathcal{S}_{\mu_t}(k) \times \mathcal{S}^c_{\mu_t}(k)$, so it takes $\mathcal{O}(k (n - k))$. Together with the updating of the GP, the runtime of an iteration is $\mathcal{O}(m_t^3 + n m_t^2 + n \log k + k (n - k))$, which is not exponential in $k$. Specifically, it is linear in $k$ as the last term is bounded by $k(n-k) < k n$. Due to lack of space, please refer to our response to reviewer Qpjm for a detailed derivation of the complexity.
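For concreteness, the two per-iteration steps above can be sketched as follows (an illustrative reconstruction, not the authors' code; the function and variable names are ours):

```python
import heapq
import itertools

def top_k_and_candidate_pairs(posterior_mean, k):
    """Sketch of lines 3-4 of Algo 1.

    posterior_mean maps each of the n inputs to its GP posterior mean.
    Step 1 is O(n log k): heapq.nlargest maintains a heap of size k while
    scanning all n means. Step 2 is O(k(n-k)): enumerate the pairs S x S^c.
    """
    top_k = set(heapq.nlargest(k, posterior_mean, key=posterior_mean.get))
    complement = set(posterior_mean) - top_k
    pairs = list(itertools.product(top_k, complement))
    return top_k, pairs

mu = {0: 0.3, 1: 0.9, 2: 0.1, 3: 0.7, 4: 0.5}  # toy posterior means, n = 5
s, pairs = top_k_and_candidate_pairs(mu, k=2)
print(sorted(s), len(pairs))  # prints [1, 3] 6
```

With $n = 5$ and $k = 2$, the candidate pair set has $k(n-k) = 6$ elements, matching the stated complexity.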
> How does the results here relate to e.g., the "Interactive submodular bandit" by Chen et al. [ICML, 2017]?
Our results are different from those in Chen et al. (ICML, 2017):
+ First, Chen et al. (2017) model the utility function as *a set function* that takes a set of inputs. Hence, the submodular property is utilized to avoid the exponential nature of the combinatorial problem. In our problem, the utility function (the black-box function) is *defined for each individual input* in the domain, which is equivalent to an additive set function. Our goal is to find the set of $k$ inputs with the highest function evaluations. This is simpler than the general submodular set function, as it *does not incur exponential time complexity for exact solution*.
+ Second, the set of interest $S_j$ (using the notation of Chen et al. 2017) is *built sequentially through interaction*, meaning $S_j$ is the set of all sampling inputs up to that point. However, in our problem setting, the top-$k$ set is estimated as the set of top $k$ inputs with the highest GP posterior mean, denoted as $\mathcal{S}_{\mu_t}(k)$. The inputs in this set are *not necessarily those that have been sampled*.
+ Third, the optimal set of interest $S^*_j$ is defined as the set having a cardinality of *at most* $T_j$ elements, where *$T_j$ depends on the sampling procedure* (refer to Equation (2) in Chen et al., 2017). This differs from our optimal (and estimated) top-$k$ set, which has a cardinality of *exactly* $k$, where *$k$ is specified before the sampling procedure*.
Therefore, the notion of regret used by Chen et al. (2017) relies on a different concept of optimal set that does not apply to our context. Additionally, it is also noted that the regret definition in Chen et al. (2017) is not based on pairwise ordering, which is the building block of our regret.
> How does your work relate to Gaussian bandit papers with correlated/dependent arms
We assume that Gupta et al. 2021 refers to the paper titled "A unified approach to translate classical bandit algorithms to the structured bandit setting" and Pandrey et al. 2007 refers to the paper titled "Multi-armed bandit problem with dependent arms". These two papers are different from our work as follows.
+ The problem: Both Gupta et al. (2021) and Pandrey et al. (2007) are interested in the arm with the maximum reward, while our work focuses on the top-$k$ set.
+ Model of dependent arms: Gupta et al. (2021) assumes parametric models for the mean rewards, while Pandrey et al. (2007) assumes that the arms are grouped into known clusters and that the rewards of arms are described by a known generative model with unknown parameters. In contrast, our work uses a GP (non-parametric) without any assumptions about input clusters.
Given the rich literature, we selected the most relevant works in the kernelized bandit and BO literature (modeling the dependency with GP) [1, 5, 8, 14, 18]. Still, our work differs from them: *the goal of finding top-$k$ set(s)*, *the regret definition based on pairwise ordering*, and *the prediction based on GP posterior mean*.
> The finiteness assumption on $X$ seems rather restrictive.
+ The top-$k$ set is undefined for a continuous domain. E.g., take $f(x) = -x^2$ with $x \in [-1,1]$. While $x = 0$ is the maximizer, pinpointing the input with the 2nd-highest function value is impossible (i.e., defining a top-$2$ set is problematic). Thus, finiteness is critical for solving the problem of finding the top-$k$ set.
+ The finiteness assumption is also common in environment monitoring applications (our motivation) such as in [8].
> assumptions on the rkhs norm ... only require for arbitrary domains $X$
Thank you for the insightful comment regarding the RKHS norm. We will revise the paper to include this additional consideration as an alternative.
---
Thank you for taking the time to read our response. We sincerely hope that the explanations provided above address your concerns and enhance your perception of our paper. We will incorporate your additional suggestions regarding the notation into the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply. You are also absolutely right, that the problem is not well-defined in the continuous case. I raised my score.
---
Reply to Comment 1.1.1:
Title: Thank You for Reconsidering Our Submission and Raising the Score
Comment: We sincerely appreciate your reconsideration of our work and the increase in the score. We will carefully incorporate your valuable suggestions into our revisions to enhance the quality of our work. In the meantime, we remain open and eager to provide any further clarifications you may need. | Summary: This paper introduces the "active set ordering" problem, which aims at recovering the top-k actions in a set by strategically sampling actions. The authors formally define the problem in the regret minimization setting and propose an algorithm for it. The authors upper bound the regret and run experiments to test the proposed algorithm.
Strengths: The authors introduced the problem of recovering top-k actions in the regret minimization setting, and developed an algorithm for it. The authors theoretically show their algorithm enjoys a $\sqrt{T}$-type regret (after ignoring some problem-dependent quantities). Experimental results show the proposed algorithm has good performance.
Weaknesses: 1. The authors didn't explicitly quantify some problem-dependent quantities in their regret bound, e.g., $\gamma_T$ and $\beta_T$. How large are these quantities in different settings?
2. A lower bound analysis is missing, which makes it even harder to know whether the upper bound is tight.
3. While I understand the studied setting is slightly different from top-k arm identification, the proposed algorithm is actually quite similar to algorithms proposed for top-k arm identification; I believe some appropriate acknowledgement is missing in the paper.
4. The wording "set ordering" is misleading: the goal of this paper is not to recover the set ordering but simply to identify the top-k actions in the regret minimization setting.
Technical Quality: 3
Clarity: 3
Questions for Authors: See comments above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for dedicating their time and effort to reviewing our paper and for recognizing the regret analysis and the experimental results. Additionally, we would like to draw your attention to two other contributions: multiple top-$k$ sets, and the new perspective of ordering for BO in the global response. We will address your remaining concerns as follows.
> 1. The authors didn't explicitly quantify some problem-dependent quantities in their regret bound.
We follow most of the works in the BO and LSE literature (e.g., [8,18]) to demonstrate a desirable asymptotic property of the algorithm: sublinear cumulative regret, i.e., $\lim_{T \rightarrow \infty} R_T / T = 0$. It implies the convergence of the algorithm as $\min_{t \le T} r_{\pi_{\mu_t}(\mathcal{S}_{\mu_t}(k), \mathcal{S}^c_{\mu_t}(k))} \le R_T / T$, i.e., vanishing per-round regret.
In this line of work (from [18]), the cumulative regret bound is often expressed in terms of problem-independent quantities $\beta_T$ and $\gamma_T$, which depend on the kernel.
The work of [18] discusses the values of $\gamma_T$ for several common kernels (which are referred to in lines 151 and 200). We will clearly state the value of $\gamma_T$ in the revised paper. For example, $\gamma_T = \mathcal{O}((\log T)^{d+1})$ for the squared exponential (SE) kernel (this kernel is used in our paper). The value of $\beta_T$ is elaborated in Lemma 2.2. Hence, by substituting $\gamma_T$ and $\beta_T$ into the cumulative regret bound and simplifying, we can obtain $R_T \le \mathcal{O}^*(\sqrt{T (\log T)^{2d}})$ (where $\mathcal{O}^*(\cdot)$ denotes asymptotic expressions up to dimension-independent logarithmic factors and $d$ is the dimension of the input).
This is the same as the cumulative regret bound of GP-UCB that is known to match the lower bound of the Bayesian optimization (BO) problem for the SE kernel. We will rely on this result to discuss how tight our cumulative regret bound is relative to the lower bound of the active set ordering problem in the following paragraphs.
> 2. A lower bound analysis is missing.
Let the lower bound of the active set ordering problem be the lower bound of the cumulative regret of the worst-case problem instance over all possible values of $k$. Then it should be at least as large as the lower bound of the special case where $k = 1$. This special active set ordering problem with $k=1$ is the Bayesian optimization (BO) problem, according to our Remarks 4.1 and 4.5. Furthermore, BO has known lower bounds for several common kernels: for example, for the SE kernel, the lower bound of the cumulative regret is $\Omega(\sqrt{T(\log T)^{d/2}})$ (from the paper "Lower bounds on regret for noisy Gaussian process bandit optimization" by Scarlett et al., 2018). Hence, the lower bound of the active set ordering problem is at least $\Omega(\sqrt{T(\log T)^{d/2}})$. Therefore, for the SE kernel, the cumulative regret bound $R_T \le \mathcal{O}^*(\sqrt{T (\log T)^{2d}})$ of our algorithm matches the lower bound up to the replacement of $d/2$ by $2d + O(1)$.
We have discussed the lower bound of the problem by considering the worst-case problem instance over all values of $k \ge 1$, rather than the lower bound of the problem for a specific value of $k \ge 1$. Thus, one may wonder if finding the top-$k$ set for a specific value of $k > 1$ is easier than solving it for $k = 1$ (BO problem). Regarding this question, we suspect that finding the top-$k$ set in active set ordering for a specific value of $k > 1$ is at least as hard as finding the top-$1$ set, i.e., the Bayesian optimization (BO) problem, for the following reasons.
+ By intuition, suppose $f$ is known. Finding the top-$k$ set requires $\mathcal{O}(n\log k)$ time, while finding the top-$1$ set only requires $\mathcal{O}(n)$ time, where $n = |\mathcal{X}|$ is the size of the input domain.
+ Moreover, BO is reducible to the problem of finding a top-$k$ set for any $k \ge 1$. Let us consider a value of $k > 1$ and a BO problem instance defined by a function $f(x)$ where $x$ belongs to a finite input domain $\mathcal{X}_f$. Suppose we know an upper bound $U$ of $f(x)$, i.e., $U > \max_{x \in \mathcal{X}_f} f(x)$ (this upper bound need not be tight).
We can create a new function $g$ defined on $\mathcal{X}_f \cup \mathcal{X}_g$ where $|\mathcal{X}_g| = k-1$ such that:
+ For all $x \in \mathcal{X}_f$, $g(x) = f(x)$
+ For all $x' \in \mathcal{X}_g$, $g(x') = U > \max_{x \in \mathcal{X}_f} f(x)$.
Then, both the maximizer $x_*$ of $f$ and $\mathcal{X}_g$ are part of the top-$k$ set $\mathcal{S}(k)$ of $g$.
As $|\mathcal{X}_g| = k - 1$, we have $\{x_*\} = \mathcal{S}(k) \setminus \mathcal{X}_g$.
Therefore, the problem of maximizing $f$ is reducible to the problem of finding a top-$k$ set of $g$, i.e., finding the top-$k$ set is at least as difficult as solving BO. Hence, the lower bound of the active set ordering problem is at least that of the BO problem.
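The reduction above can be made concrete with a small sketch (our own illustration; the exact top-$k$ computation here stands in for any active set ordering solver, and all names are invented):

```python
def maximize_via_top_k(f, domain_f, U, k):
    """Reduce maximizing f to finding a top-k set of an augmented g.

    U must satisfy U > max_x f(x). We add k-1 dummy inputs with value U;
    the top-k set of g then contains all dummies plus f's maximizer.
    """
    dummies = [("dummy", i) for i in range(k - 1)]

    def g(x):
        return U if x in dummies else f(x)

    domain_g = list(domain_f) + dummies
    top_k = sorted(domain_g, key=g, reverse=True)[:k]  # exact top-k oracle
    (x_star,) = [x for x in top_k if x not in dummies]
    return x_star

# Maximizing f(x) = -(x - 3)^2 over {0, ..., 5} via a top-3 set:
x_star = maximize_via_top_k(lambda x: -(x - 3) ** 2, range(6), U=1, k=3)
print(x_star)  # prints 3
```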
> 3. I believe some appropriate acknowledgement (of top-k arm identification) is missing in the paper.
As you may have noticed, we discussed the difference between our work and the best-$k$ arm identification problem in *the footnote* on page 2. We also acknowledged the similarity with best-$k$ arm identification [12] in the intuition of choosing the input pair in *line 185*. Yet, our approach is different in several ways:
+ The choice of sampling input (Lemma 3.6).
+ The regret bound based on the maximum information gain.
+ The justification of mean prediction (Sec. 3.2).
+ Multiple top-$k$ sets (Remark 4.6).
+ A new perspective on the well-known BO solution.
---
Thank you for your patience in reading our response. We sincerely hope that the above clarifications have improved your opinion of our paper. We will carefully incorporate your valuable feedback and the points discussed above into the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. I have increased my score to 5.
---
Reply to Comment 1.1.1:
Title: Thank You for Your Reconsideration and Improved Score
Comment: Thank you very much for your reconsideration and the improved score. We sincerely hope that any previous concerns have been satisfactorily addressed, as we did not identify any remaining issues in your latest comment. We truly appreciate the discussion and will carefully integrate your valuable feedback into the revised paper. Please do not hesitate to reach out if you require any further clarification until the end of the discussion period. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their time and effort in reading and evaluating our paper. We are encouraged by the positive feedback and appreciation of our work regarding its novelty (reviewers Qpjm and i8Rj), theoretical soundness (reviewers Qpjm and i8Rj), and experimental results (reviewers gNpn and Qpjm).
---
We are pleased to summarize our contributions as follows.
+ We address the problem of estimating the top-$k$ set of a black-box function modeled by a Gaussian process. Specifically, we propose novel regret notions of pairwise ordering and set ordering. Then, we analyse the regret to justify our prediction and sampling strategy.
+ We extend our solution to accommodate the estimation of multiple top-$k$ sets, as discussed in Remark 4.6 and Section 5.2, driven by practical motivation in environmental monitoring applications.
+ We offer a novel perspective on the well-known GP-UCB algorithm through the lens of ordering (that has not been previously explored in Bayesian Optimization literature), which yields several nuanced insights (Remark 4.5).
---
In our response to the reviewers, we have provided additional clarifications, including:
+ Discussion on the lower bound of the proposed problem (reviewer gNpn).
+ Comparison between our work and existing studies (reviewer 2jNL).
+ The linear time complexity in the input domain size and linear time complexity in $k$ (reviewers 2jNL and Qpjm).
+ Additional experiments involving a 6-dimensional input domain in the PDF file attached in this response (reviewer i8Rj).
---
We genuinely hope that our response effectively addresses the reviewers' concerns and enhances their opinion of our paper. Your feedback is invaluable to us, and we are eager to incorporate it to further improve our work.
Pdf: /pdf/db1d9820890757feef815ad49204f664ca1781f5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HOI-Swap: Swapping Objects in Videos with Hand-Object Interaction Awareness | Accept (poster) | Summary: This paper presents HOI-Swap, a diffusion-based video editing framework for object-swap editing. HOI-Swap involves two stages. In the first stage, the authors train an image-editing model to swap the object in one frame. In stage II, the authors first warp a video from the edited frame using the optical flow sampled from the source video. Then a video diffusion model is trained to reconstruct the whole video from the warped frame sequence. Through this two-stage approach, HOI-Swap successfully addresses three main challenges in HOI video editing: (a) HOI-aware capabilities, (b) spatial alignment of the object with the hand, and (c) temporal alignment with the source video. Experiments demonstrate that HOI-Swap outperforms both image-editing and video-editing baselines.
Strengths: 1. This paper addresses a novel task: HOI video editing.
2. This paper pinpoints the three main challenges facing this task.
3. The video-editing stage employs optical flow from the source video to achieve temporal alignment with the source video.
4. HOI-Swap demonstrates better performance than existing video-editing approaches in HOI video editing.
Weaknesses: 1. Stage I does not seem to address HOI-awareness explicitly. The authors should explain why the image-editing model can perceive HOI.
2. The authors did not mention the generalizability of their method across unseen kinds of objects.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How can the image-editing model in stage I perceive HOI? The authors should provide more explanation and maybe some more ablation studies to support their results.
2. The whole pipeline seems applicable to general object-swap tasks. Have the authors tested the model on other objects?
3. Are the backgrounds in the test set all seen by the model during training?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer rdgi for the helpful comments and for providing thoughtful feedback on our work.
---
**1. HOI awareness in stage I**
> How can the image-editing model in stage I perceive HOI?...maybe some more ablations
The model’s ability to perceive HOI fundamentally relies on the data it is trained on. By providing the model with a large volume of HOI images, it learns the diverse ways in which hands interact with various objects. HOI-rich data acts as the driving force, equipping the model with the necessary insights to accurately replicate and understand these interactions.
In terms of the specific techniques, in stage I, we mask the source frame with a square bounding box (Ln 162-166), as opposed to using the source object’s segmentation mask as in [57]. This masking strategy not only directs the model to fill the predefined masked area but also to generate plausible hand-object interactions aligning with the source frame’s hand position and the reference object. To demonstrate this point, we experimented with a variant of the stage I model that uses the original object's segmentation mask instead. As illustrated in Figure R2 of the rebuttal PDF, this variant struggles with grasp changes when the reference and source objects differ, thereby reinforcing the effectiveness of our chosen approach. We will incorporate this discussion in the paper.
---
**2. Applicability to general object swaps, testing on other objects?**
Yes. As noted in Ln 567-568, we designed our train-test split based on object instances, to evaluate our model’s ability to generalize to unseen object instances, such as a new mug not encountered during training. Our qualitative results indicate that HOI-Swap can handle novel object instances and deliver good edits. Moreover, as demonstrated in Figures 4 and 8 of the paper, HOI-Swap performs robustly in general object-swap tasks, in cluttered scenes where no hand is present, showcasing its strong capability across diverse swapping scenarios.
On the other hand, object category-level generalization, which requires the model to generate plausible interactions with a completely unseen object category (e.g., accurately depicting a hand holding scissors, despite the model never encountering scissors during training), is considerably more challenging and is not validated here. This requires the model to acquire a broader understanding of novel interactions. We acknowledge this as an exciting direction for future work.
> The whole pipeline seems applicable to general object-swap tasks. Have the authors tested the model on other objects?
Following the discussion above, our experimental setup aims at assessing generalization across new object instances. If the reviewer could specify or elaborate on “other objects,” we would be happy to address this further.
---
**3. Background generalization**
> Are the backgrounds in the test set all seen by the model during training?
We provide zero-shot video editing results on two datasets (EPIC-Kitchens and TCN Pouring), where the test videos feature backgrounds not encountered by the model during training. This showcases HOI-Swap’s generalization ability to new backgrounds.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their response. I think most of my concerns are addressed.
I want to clarify Question 2. By "general object-swap tasks" I mean object-swap tasks without HOI, and "other objects" means different categories of objects. The authors have addressed my concerns on object category-level generalization. I hope the authors include more discussion of the general object-swap task in the final manuscript.
Generally speaking, this is an interesting and novel paper. For now, I will keep my initial score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer rdgi's further comment
Comment: Thank you for your insightful comments and for clarifying your question regarding general object-swap tasks. In the experiments, we indeed include a diverse set of images to evaluate our model comprehensively. As noted in Ln 259-261, our evaluation involves both HOI-contact and non-HOI-contact scenarios:
+ Our editing benchmark comprises 5,000 test images, of which 20.1% represent non-HOI-contact scenarios, highlighting our model's ability to handle general object-swap tasks where hand-object interaction is absent.
+ For qualitative results, row 3 and 4 of Figure 4 in the main paper, row 3, 5 and 6 of Figure 9 in Supp. showcase general swapping scenarios without human hands, emphasizing our model's versatility across different contexts.
We appreciate your suggestion and will expand on this discussion in the updated manuscript. Thank you once again for your thoughtful review. | Summary: This work presents a novel approach for object insertion in hand-object interaction scenarios. The proposed approach consists of two stages: image-based editing to for precise alignment of hand with the inserted object, and video-based editing for motion alignment with the original video. The model is trained in a self-supervised manner and also imparts a varying level of controllability to adjust the degree of motion alignment based on object changes. Extensive experiments on HOI4D, EgoExo4D, EPIC-Kitchens and TCN Pouring datasets show the effectiveness of the proposed approach in various scenarios.
Strengths: - The proposed 2-stage approach to decompose the complexity of the task is intuitive and handles the challenges of hand-object interaction changes, correct object placement and temporal alignment of HOI motion effectively.
- The first stage can also be used in a stand-alone manner for image editing tasks and the controllability in motion alignment is a useful feature.
- The self-supervised training strategy circumvents the need for collecting paired training data for this task.
- Extensive experiments (Tab.1) on HOI4D & EgoExo4D show the effectiveness of the proposed approach over several baselines for both image and video editing tasks. The zero-shot generalization setting to EPIC-Kitchens and TCN Pouring datasets are also considered.
- Visualizations in Fig.4,5 are helpful in understanding the capabilities of the proposed approach.
Weaknesses: - Several details about the experimental setup are missing from the main paper. These details are important to understand the scope of the claims in the paper. While the supplementary contains more details, it'd be helpful to have more clarity on the following aspects:
- For evaluation on HOI4D & EgoExo4D, how is the test split created? Do the held-out videos contain different object categories or different instances from the same category or different subjects performing the experiments or different actions being performed? Having quantitative experiments in these different settings would help in understanding the benefits and limitations of the proposed approach.
- Do the results in Tab.1 span all 4 datasets? Again, it'd be helpful to have a breakdown of the results in different settings to understand where the proposed approach is most effective.
- Are all the baselines trained in the same setup as the proposed approach? I understand that it might not be feasible to implement/train all baselines due to differences in architecture and compute requirements, but it'd be useful to clearly describe the protocols followed for each baseline. Are these baselines retrained or used in a zero-shot manner? For example, AffordDiff [62] only takes a single RGB image of the object as input and generates the HOI image, it does not take the reference video into account so it is expected that the object orientation may not be consistent with the video (L43-50). AffordDiff also allows for controllability in the layout & orientation of the generated hand so it can be modified to match the hand in the reference video. I'm not asking to reimplement AffordDiff to adapt to this new setting, but it'd be useful to have these details in the main paper to understand the differences between baselines and the proposed approach.
- While the 2-stage decomposition of the approach makes sense, it'd also be useful to quantitatively verify that it is indeed better than the 1-stage analog. For example, in Fig. 3, Stage-2 can take the DINO encoding of the reference object instead of the generated image from Stage-1 to create a 1-stage approach. This is not required for rebuttal, but a suggestion to validate the 2-stage approach.
Technical Quality: 3
Clarity: 2
Questions for Authors: I need clarifications on the following aspects to better understand the results (more details in the weaknesses above):
- For evaluation on HOI4D & EgoExo4D, how is the test split created? Do the held-out videos contain different object categories or different instances from the same category or different subjects performing the experiments or different actions being performed?
- Do the results in Tab.1 span all 4 datasets? It'd be helpful to have a breakdown of the results in different settings to understand where the proposed approach is most effective.
- Are all the baselines trained in the same setup as the proposed approach? It'd be useful to clearly describe the protocols followed for each baseline. Are these baselines retrained or used in a zero-shot manner?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: L330-332 in the main paper defers the limitations and failure modes to the supplementary without providing any insights. Please add these details to the main paper.
---
I have read all the reviews and the rebuttal. I thank the authors for providing additional clarifications, which help me understand the paper better and I have increased my score to Weak Accept.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer gbKB for the helpful comments and for providing thoughtful feedback on our work.
---
**1. Clarification on experimental setup**
***(i) Data split***
As we focus on the problem of “object” editing, videos are split based on object instances (Ln 567-568). The held-out videos feature different instances of the same object categories, testing the model’s ability to swap unseen instances (e.g., a different mug not seen during training) rather than entirely new categories (e.g., scissors, if scissors were not included in the training data). Addressing object category-level generalization presents a significantly more challenging problem, requiring the model to generate plausible interactions with a completely unseen object category (e.g., accurately depicting a hand holding scissors, despite the model never encountering scissors during training). We acknowledge this as an exciting direction for future work.
Inspired by your comment on alternative ways of splitting the data, we conducted experiments with two new splits: (1) across different subjects, and (2) across different actions. The results are reported in Table R1 of the rebuttal PDF. HOI-Swap consistently outperforms baseline approaches in these new settings, demonstrating its effectiveness.
***(ii) Baselines implementation***
Baselines including PBE [57], AnyDoor [8] and AnyV2V [29] are adopted in a zero-shot manner. AffordDiff [62] utilizes the same training data source (HOI4D) as ours. We do not provide additional layout or hand orientation information to AffordDiff. VideoSwap [18], a one-shot-tuned video editing approach, is trained on the source video using the default setup in their official repository.
Note that we also evaluate on two out-of-domain datasets (EPIC-Kitchens and TCN pouring), where all approaches (except VideoSwap which is tuned on the source video), including ours, are applied in a zero-shot manner.
We appreciate your feedback and will ensure these clarifications are included in the main paper.
***(iii) Results breakdown***
As suggested, in Table R2 of the rebuttal PDF, we provide a breakdown of video editing results, separating results from in-domain videos (HOI4D) and out-of-domain videos (EPIC-Kitchens & TCN Pouring). Note that the video editing results in Table 1 span three datasets (HOI4D, EPIC-Kitchens and TCN Pouring) with EgoExo4D excluded due to its absence of high-FPS mask annotations (see Ln 575-577). We observe that the performance gain of HOI-Swap is higher for in-domain videos than out-of-domain videos. We will update our manuscript to incorporate these discussions.
---
**2. Validation of the two-stage approach**
We appreciate the suggestion to verify the efficacy of our two-stage design compared to a one-stage counterpart (though the reviewer notes this is not required for the rebuttal) and have conducted additional experiments to assess this. We train one VideoLDM [2] that takes the reference object image and the masked frame sequence as input and outputs the edited video; this serves as a trained baseline for a one-stage object swapping approach.
We provide both qualitative and quantitative comparisons in Figure R1 of the rebuttal PDF. The one-stage approach does not yield satisfactory results, specifically failing to preserve the reference object’s identity. We believe this deficiency stems from the model’s need to handle both spatial and temporal alignments simultaneously. We will add this analysis in the paper.
---
**3. Inclusion of limitations in main paper**
We will add a summary of the key limitations and failure modes (currently discussed more in Supp. and Supp. video) directly into the main text to ensure comprehensive accessibility of these details within the main paper.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications
Comment: I appreciate all the clarifications provided by the authors. I have increased the score to Weak Accept and suggest to include these additional details in the final version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful feedback and valuable support! We are pleased to confirm that we have fully addressed your concerns. We will make the necessary updates to the manuscript in light of your thoughtful suggestions. | Summary: This article proposes HOI-Swap, a two-stage video editing framework designed for precise object edits with HOI awareness. To address the problem of realistic perception of HOIs as well as spatial and temporal alignment with the original video, HOI-Swap's first stage focuses on HOI awareness and establishing spatial alignment by training an image inpainting diffusion model to swap objects in one frame. The second stage warps the video sequence from the edited frame by tracking inter-frame points with randomly sampled optical flow, and trains a video diffusion model to generate the new video.
Strengths: 1. This paper is well organized and easy to understand.
2. The ablation experiment fully confirms the effectiveness and contribution of the improvement measures proposed in this paper.
Weaknesses: 1. Limited innovation. The methodology of this paper is a combination of existing work and is more like a technical report than an academic paper.
2. Lack of dynamic HOI awareness transfer. The second stage does not embody HOI interaction awareness. The single-frame propagation only propagates the motion information of the object and does not embody how the HOI-aware information is conveyed in the video sequence.
3. Insufficient experiments. This paper only demonstrates HOI awareness for single-frame image editing. For videos, it shows either scenes with little change in hand pose or scenes with little change in object characteristics; it does not demonstrate results on complex video sequences with large changes in object characteristics that require dynamic changes in hand pose.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Line 174: this paper states that data augmentation techniques such as flipping, rotation, and perspective transformations are performed due to the differences in size and pose between the target and reference objects; why not also use scaling techniques to bridge the gap between large and small objects?
2. Line 205: this paper proposes applying the largest bounding box as the mask for the whole video sequence. For objects with a relatively large range of movement, will a larger mask region cause the sampled points to no longer reflect the true proportion of extracted object features and motion information? (For example, if the target object only occupies 1/4 of the mask in a certain frame, a uniform sampling ratio of 50% would still make it difficult to extract the object's motion information.)
3. Why not continue using DINO in the second-stage network instead of switching to CLIP?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer NtRs for the helpful comments and for providing thoughtful feedback on our work.
---
**1. Limited innovation**
> The methodology of this paper is a combination of existing work
We respectfully disagree. As acknowledged by three other reviewers, our paper presents both a novel task and approach. Reviewer ih2o noted that “The HOI-Swap method is novel and addresses a yet unexplored problem”, reviewer gbKB mentioned it “presents a novel approach”, and reviewer rdgi recognized it as addressing “a novel task: HOI video editing”.
***Clarifications of innovations.*** In the related works section, we thoroughly discuss existing approaches and outline their differences with us. Our experimental results (Ln 316-321) further demonstrate that the leading generative methods fall short in producing satisfactory edits within HOI scenarios, thus underlining the importance and necessity of HOI-Swap.
***Innovation highlights.*** (i) Problem Space: We investigate HOI-aware object swapping in videos, an area where current leading generative models falter (see Figure 4 of the main paper). This problem space has not been previously explored, and our work serves to fill this gap in the field. (ii) Technical Contributions: Our proposed editing framework simplifies complexity by dividing the task into two stages. The first stage introduces HOI awareness and spatial alignment—elements notably missing in existing image editing methods. The second stage offers controllable motion alignment with the original video, utilizing motion points and warping techniques; this approach stands in stark contrast to existing video editing approaches that enforce 100% motion alignment with shape-based conditional signals.
We would appreciate it if reviewer NtRs could identify any potentially overlooked references, allowing us to further clarify and set apart our contributions.
---
**2. Clarification on HOI awareness**
> The second stage does not embody HOI interaction awareness. The single-frame propagation only propagates the motion information of the object and does not embody how the HOI-aware information is conveyed in the video sequence.
We appreciate your comment and clarify below. HOI-awareness is a spatial property that can be reliably captured in a static frame and generally remains consistent over time. In our study, we observe that across the four datasets we use, HOI interactions are stable throughout the sequence. As an intuitive example, when swapping a hand-held mug, one HOI frame suffices to conceptualize how a new object would replace the original, including any necessary adjustments in hand grasp patterns; we then propagate this edit to the remaining frames using our controllable motion guidance techniques.
Our two-stage design is driven by this principle. We recognize the reviewer's concern regarding scenarios where an object may undergo multiple distinct hand interactions within a short clip, though these are uncommon (based on our observation across 4 datasets). As noted in Ln 741-745, we view our work as an initial step towards HOI-aware video editing challenges, and plan to explore more complex video sequences in future work.
---
**3. Complex video sequences with large object changes**
> Either the scene with little change in hand poses or the scene with little change in object characteristics is demonstrated, but for the complex video sequences with large changes in object characteristics and the need for dynamic changes in hand pose, the paper does not give the relevant effects to demonstrate
Through various qualitative generation results in our paper and Supp. video, we show that HOI-Swap can adeptly address diverse object swaps in videos. For instance, Figure 1 of the main paper illustrates the replacement of a kettle with a bottle and a bowl—objects that greatly differ in shape and appearance from the original kettle. Figure R2 of the rebuttal PDF (first and last row) provides further evidence of its editing capabilities for differently-shaped objects.
Moreover, our Supp. video includes scenarios featuring various object and action variations. Object variation examples include swapping a bottle with a mug (page 5), a bowl with a kettle (page 5), and a trash can with another differently-shaped one (pages 9-10). For action variations, examples include closing a laptop display (page 4), picking up scissors (page 4), uprighting a tilted bottle (page 5), pushing and rotating a toy car (page 6), and closing a trash can lid (pages 9-10).
We invite the reviewer to provide more details on what they consider “complex video sequences with large object changes,” so we can address this aspect more thoroughly. Lastly, as noted in Ln 316-321, HOI-aware video editing is a very challenging problem, and our experiments reveal that even leading video generation approaches struggle with the scenarios we have tested. We acknowledge in our limitations statement (Ln 744-745) that improving HOI-Swap’s capabilities to handle longer video sequences with more intricate HOIs is an important future direction.
---
**4. Clarification on scaling**
> Why not use scaling techniques…?
We indeed incorporate scaling as our data augmentation by randomly resizing the reference object (Ln 644-646 of Supp.). We'll ensure to clearly mention this in Ln 174 of the main paper.
---
**5. Clarification on bounding box**
The largest bounding box is used only for preparing the masked input sequence (Ln 205) to prevent exposing the source object to the model, while sampling points sparsity is decided by the conditioning frame’s bounding box (Ln 222). We'll clarify that these are separate processes in the main paper.
---
**6. Motivation of DINO vs. CLIP encoders**
Please refer to our general response.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing a rebuttal. The authors' replies on scaling as a data augment technique and mask region selection as well as additional experiments evaluating CLIP vs DINO as an encoder for stage 2 answered my confusion.
1) Novelty. I agree that this paper primarily focuses on a new task: video HOI editing. However, when it comes to the more significant aspect of HOI awareness, I still believe that there isn't much novel design.
2) Dynamic HOI Awareness. As written in the rebuttal, HOI-Swap lacks the ability to handle long video sequences with more complex HOI variations, which is an important direction for video editing.
3) Object Shape Variations. I am more interested in examples where the hand-object interaction pose varies greatly due to different categories. I agree with the validity of replacing the kettle with a bottle and a bowl-like object in Fig. 1 of the main paper, but the poses of the grasped objects are similar.
I agree with Reviewer ih2o that this paper should clarify its downstream scenarios and further explain its capability boundaries.
Considering that the authors solved some of my confusion and the paper does propose a new task, I raised the score to Borderline Accept. | Summary: The work considers the task of swapping the objects in ego-centric short clips with hand-object interactions. The manuscript claims to introduce this sub-task within the field of generative video editing. Concretely, HOI-Swap starts from RGB video, object area bounding box, and an image of the target object, and generates a new video, transferring hand movement to the new object, possibly adapting to the changing functionality of the object. The method has two stages: inpainting a single frame, and extending it to the whole sequence ensuring consistency. HOI-Swap is thoroughly evaluated against recent video and image editing methods, also the work demonstrates qualitative examples of generalization to novel datasets.
Strengths: 1) The HOI-Swap method is novel and addresses a yet unexplored problem;
2) The proposed method is technically sound. It employs several techniques to ensure robustness and generalization of the resulting model: reference object augmentation technique to ensure robustness to viewpoint orientation, robust masking (square mask, consistent across frames to prevent overfitting), and randomized selection of the anchor frame.
3) Another notable feature of the method is its controllability through motion guidance. This original idea enables to variation of the amount of preserved information from the source object's motion to the target, thus allowing the method to adapt to the changing functionality of the objects.
4) The manuscript is well-written and easy to follow.
Weaknesses: A minor weak point is the lack of detailed discussion on downstream applications of the model. The Introduction section touches on this topic, however, an example of possible demos or a deeper discussion of possible usages of the model could help to better motivate the importance of the introduced task and the proposed method.
Another possible area for improvement is automatic input mask annotation. Currently, the method requires a bounding box of the object as an input, however, with current advancements in the segmentation methods this requirement can be alleviated (or at least quantitatively compared to the GT bbox usage).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Stages 1 and 2 employ different image encoders (DINO and CLIP), what is the motivation for this choice?
2) Lines 212-220 discuss the ability to vary the number of control points sampled for motion guidance. What is the default value used for quantitative evaluations?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: A detailed discussion of limitations is present in the Appendix with complementary examples in a supplementary video.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer ih2o for the helpful comments and for providing thoughtful feedback on our work.
---
**1. Downstream applications**
***(i) Entertainment.*** As showcased in Figure 1 of the main paper and Supp. video (pages 2-3), HOI-Swap can be applied in scenarios where object modification is required without reshooting footage. For example, in advertising, there may be situations where a pre-recorded video needs to adapt to new sponsorship requirements by replacing a soda can in the video with a water bottle. HOI-Swap offers a practical tool to seamlessly swap the objects, while maintaining realistic hand interaction patterns that adapt to the new object's shape and affordance.
***(ii) Data augmentation in robotics.*** There is an increasing trend of teaching robots through videos of humans performing tasks (Ln 24-26). By recording just a single video of picking up a mug, HOI-Swap can generate multiple variations of this video with different objects (e.g., bottles, bowls, kettles), all following the same motion trajectory. This capability can greatly reduce the need for extensive data collection in robotics.
---
**2. Automatic input mask annotation**
> Currently, the method requires a bounding box of the object as an input, however, with current advancements in the segmentation methods this requirement can be alleviated (or at least quantitatively compared to the GT bbox usage).
We appreciate the suggestion. While it is generally assumed that users will provide the ground truth bounding box or segmentation mask of the object they wish to replace during inference [8, 57], we acknowledge the potential of incorporating automatic segmentation methods to reduce user effort. Following this, we applied the recently released SAM-2 to identify bounding boxes on test videos as an alternative to manually providing ground truth. This method requires just a single click inside the object in the initial frame, after which SAM-2 automatically tracks the target object. The quantitative comparisons are presented in the table below. While there is some degradation across the three HOI metrics, we believe this feature is valuable for downstream applications as it greatly eases the input requirements and improves user convenience. Note that the baseline approaches have more stringent input requirements than ours, e.g., requiring precise object segmentation masks or additional text prompts (see Ln 590-596). Even with the use of automatically generated masks, HOI-Swap demonstrates its great advantages over the baselines. We will add this discussion in the paper.
| Model | CLIP consistency | Motion smoothness | Contact agreement | Hand mIOU | Hand confidence |
|-------------|:----------------:|:-----------------:|:-----------------:|:---------:|:---------------:|
| Prior best | 90.5 | 97.5 | 82.4 | 61.5 | 78.4 |
| HOI-Swap (SAM-2 bbox) | 91.2 | 98.0 | 87.8 | 73.9 | 91.8 |
| HOI-Swap (GT bbox) | 91.4 | 98.0 | 89.9 | 79.0 | 96.6 |
---
**3. Motivation of DINO vs. CLIP encoders**
> Stages 1 and 2 employ different image encoders (DINO and CLIP), what is the motivation for this choice?
Please refer to our general response.
---
**4. Default sampling points sparsity for motion guidance**
The default value used for quantitative evaluations is 50%. We appreciate your pointing it out and will include this information in the paper.
---
Rebuttal Comment 1.1:
Title: Reply by Reviewer ih2o
Comment: I thank the authors for providing the rebuttal. The additional experiments (evaluation of CLIP vs DINO as an encoder for stage 2 and evaluation of using non-GT mask predicted on the fly) clarify my concerns and strengthen the work.
After considering the authors' replies and other reviews, I have decided to increase my score to accept. The work tackles a novel problem, providing a solid baseline for future research.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful feedback and valuable support! We are pleased to confirm that we have fully addressed your concerns. We will make the necessary updates to the manuscript in light of your thoughtful suggestions. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful and constructive review of our manuscript.
Three of the four reviewers recommend accepting. We were encouraged that the reviewers found our problem and approach novel (ih2o, gbKB, rdgi), technically sound (ih2o), and that our paper pinpoints and addresses this task’s challenges effectively (rdgi, gbKB), backed up with extensive experiments (gbKB), ablations (NtRs) and helpful visualizations (gbKB).
In response to the feedback received, we provide a general response here to address a query shared by two reviewers (ih2o and NtRs), and individual responses below to specific points from each reviewer. Please refer to the attached rebuttal PDF, where supplementary figures and tables have been provided to substantiate our responses.
---
**Motivation of DINO vs. CLIP encoders**
We thank Reviewer ih2o and reviewer NtRs for raising this question. Stage I and stage II of our pipeline employ the DINO and CLIP encoders, respectively, to align with their specific objectives. Stage I, focused on swapping the reference object within a single frame, benefits from the DINO encoder due to its enhanced ability to capture “objectness” compared with the CLIP encoder. The main emphasis of stage II is to transfer motion from the source video, and a CLIP encoder is adopted to provide scene context for generating the video. Given the reviewers’ query, we additionally conduct experiments to explore the possibility of using a DINO encoder for stage II. As reported in the table below, both encoder variants perform similarly. We will add this discussion in the paper.
| | CLIP consistency | Motion smoothness | Contact agreement | Hand mIOU | Hand confidence |
|:------------------------:|:----------------:|:-----------------:|:-----------------:|:---------:|:---------------:|
| HOI-Swap (DINO encoder) | 91.2 | 98.1 | 88.9 | 79.9 | 96.2 |
| HOI-Swap (CLIP encoder) | 91.4 | 98.0 | 89.9 | 79.0 | 96.6 |
---
We would again like to thank all reviewers for their time and feedback, and we hope that our responses adequately address all concerns. Any further questions are highly welcomed.
Pdf: /pdf/5a7166839d7840b16e3d619bc7fa2f573a274a8d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning | Accept (poster) | Summary: This paper proposes CorDA, a context-oriented decomposition adaptation method that initializes LoRA adapter with different components from the weight decomposition to support two different options: knowledge-preserved adaptation and instruction-previewed adaptions. Through extensive experiments on LLaMA-2-7b and RoBERTa, the paper shows that the knowledge-preserved adaptation can better preserve performance on general knowledge task and the instruction-previewed adaptations can yield better target task performance compared to the baseline PEFT methods.
Strengths: 1. The experimental results on the instruction-previewed mode are strong. Table 2 and Table 3 demonstrate that the purposed method yields better performance when fine-tuning on targets compared with other PEFT methods and its performance is nearly on par with full finetuning.
2. The proposed method is intuitive and simple in concept. If the decoupled components capture context in the principle components decoupled, the two modes are well-motivated.
Weaknesses: 1. One major concern of this work is the lack of theoretical support, and some arguments in the paper are not well justified.
- Line 45-47: “The covariance matrix of each layer’s activation … are responsive to the task triggered to highlight different aspects of the pre-train weight.” Is there any support for this or is this a pure intuition?
- Why do you perform SVD on $WC$ ? What does $WC$ represent?
2. The knowledge-preserved mode is not very compelling given this method will hurt the target task performance and cannot achieve the best of both worlds by design. Although the authors have already recognized this when stating the limitations, I don’t know why the paper emphasizes this mode in parallel with the instruction-previewed adaptation mode. Regarding the design of method to achieve the best of both worlds, [1] may be a pointer for reference as it also studies how to preserve the forgetting of general knowledge in LMs when continual training them.
[1] Continual Pre-training of Language Models, Ke et al., ICLR 2023
**Update:** Most of points are addressed by the author response, so I update my score from 4 to 6.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does the major difference between CorDA and LoRA lie in the initialization? Have you compared with other PEFT methods that focuses on initialization technique? Currently, the baseline comparison is not very comprehensive.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, it's discussed from Line 293.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contributions of our work and your valuable comments.
> (1) One major concern is the lack of theoretical support, and some arguments are not well justified. Is there any support for Line 45-47 or is this a pure intuition? Why SVD on $WC$? What does $WC$ represent?
We first highlight the significance of our work even though it is lacking a theoretical support, and then provide some supports from literature and our empirical analysis justifing Line 45-47. Finally, we explain our method of SVD on $WC$.
_1. Significance of our work even with no theory:_
Despite the lack of theory, the contribution of our work is significant for the following reasons
- **It is not necessary to use a theory to show its validity.** Activation awareness has been proven to be important in LLM compression and quantization [27, 35, 65]. But in PEFT, existing studies rarely consider task context.
So, a fundamental originality of our method is that we introduce task context such that the resulting adapter initialization is task-dependent. We provide some other supports (in _"2. Supports for Line 45-47"_) for why using covariance matrix and its advantage over only using the activation itself.
- Existing _PEFT methods_ rarely support the option of finetuning with the pre-trained knowledge better preserved (**[1] is not a PEFT method**). Since our method is task-dependent, we offer such flexibility enabling both knowledge-preserved adaptation and instruction-previewed adaptation, customized for the actual need. When knowledge maintenance is a need, the former satisfies this demand. When we only want to push the limit of the downstream tasks without concerning about the pre-trained knowledge maintenance, the latter is favorable.
- In experiments, compared with LoRA, our knowledge-preserved adaptation achieves improvements on both worlds (shown in Table 1), enjoying better results than LoRA on both world knowledge benchmarks and downstream tasks, and having the best comprehensive average performance among all the compared methods.
Our instruction-previewed adaptation is able to further improve the downstream task performance, surpassing LoRA, DoRA, and PiSSA in all the three finetuning tasks (shown in Table 2). In either case, the experimental performance is a progress in the field.
Therefore, we believe that our contributions should not be unappreciated just because our method lacks a theoretical support, which is actually a common issue in existing PEFT methods. Actually, there is no theorem in the original LoRA paper and most of the follow-up studies.
_2. Supports for Line 45-47:_
- **Support from literature**. Activation awareness has been proven to be effective in LLM compression and quantization [27, 35, 65]. OWQ [27] proposes to leverage the outlier awareness from covariance matrix to make weight quantization task-specific. ASVD [65] also considers outliers, scaling the pre-trained weight with the input activation to perform SVD for model compression. The rationality behind these studies is that the outlier in activation or covariance matrix is responsive to the task triggered by the input, which is also our motivation of using covariance matrix to orientate the decomposition.
- **Support from empirical analysis**. As suggested by Reviewer Mgdn, we visualize the covariance matrix collected from samples from the three tasks, MetaMath, NQ open, and Trivia QA. Note that we downsample the covariance matrices from a high dimension (4096 or 11008) into $32 \times 32$, and visualize their heatmaps. Please see the **PDF file of the global rebuttal** for the visualization results. It is shown that the heatmaps from NQopen and TriviaQA (both are QA tasks) share some similar patterns marked in red circles, which do not appear in the heatmap from the different task MetaMath. The visualization result further supports that covariance matrix can be used to characterize the triggered task.
- **Support from experimental results**. As stated above, ASVD and our method has a similar motivation. ASVD performs SVD on the pre-trained weights scaled by input activation for model compression. We use covariance matrix to orientate the decomposition of weights for building adapters. We compare our context-oriented SVD (CO-SVD) with ASVD in Figure 2 and Table 6 of our paper. It is shown that when discarding the smallest several components, CO-SVD is much better at maintaining the performance of WikiText-2 and PTB than ASVD and Plain SVD. This result indicates that our CO-SVD has a stronger ability in assembling task context into its principle components. Therefore, covariance matrix is better choice for us to build task-dependent adapters.
_3. Why SVD on $WC$ and what does $WC$ represent:_
As we respond above, our CO-SVD, using covariance matrix to orientate the decomposition of pre-trained weights, has a strong ability in assembling task context into the principle components. As shown in Figure 2 and Table 6, our CO-SVD, _i.e._ SVD on $WC$, is much better than the plain SVD that does not include task context, and ASVD that performs SVD on the weights scaled by activation.
So, we perform SVD on $WC$, where $W$ is the pre-trained weight and $C$ is the covariance matrix collected from a few samples. Its principle components after decomposition are task dependent and better capture task context than only using the activation itself.
> (2) The knowledge-preserved mode is not very compelling given this method will hurt the target task performance. Why the paper emphasizes this mode in parallel with the instruction-previewed adaptation mode.
>
> (3) The other questions
We have discussion about the unique value of our knowledge-preserved mode and answer to your remaining questions in the following comment.
---
Rebuttal 2:
Title: Discussion about the value of knowledge-preserved mode and answers to remaining questions
Comment: > (2) The knowledge-preserved mode is not very compelling given this method will hurt the target task performance. I don’t know why the paper emphasizes this mode in parallel with the instruction-previewed adaptation mode. [1] may be a pointer for reference as it also studies how to prevent the forgetting of general knowledge in LMs when continually training them.
**Please note that the knowledge-preserved adaptation does NOT hurt the target task performance**. As shown in Table 1, our method performs better than LoRA in both worlds (world knowledge benchmarks and downstream tasks) in most cases. We **quote some results of LoRA and CorDA from Table 1 here** for your reference.
|Method|Trivia QA | NQ open | WebQS | GSM8k | Math | Avg. |
|---|---|---|---|---|---|---|
|LoRA|44.17 | 1.91 |6.64| 42.68| 5.92| 20.26|
|CorDA | 44.30 | 9.36 | 7.14 | 44.58 | 6.92 | 22.46 |
|Method|Trivia QA | NQ open | WebQS | MTBench | Avg.|
|---|---|---|---|---|---|
|LoRA|47.46 |10.28| 7.73| 4.60| 17.52|
|CorDA | 50.34 |14.43| 8.17| 5.05| 19.50|
More importantly, as we explain above, existing PEFT methods rarely consider or support finetuning with knowledge better preserved. There are some studies on the continual training of LLMs, [1] and [14, 21] (ref. in our paper), but they are not PEFT methods. We will cite [1] in the revised version of our paper.
As shown in Table 1, when the average performance over both worlds is used to measure comprehensive ability, CorDA in knowledge-preserved adaptation achieves the best results among all compared methods in all three tasks. **Therefore, we respectfully disagree with the comments that "knowledge-preserved adaptation is not compelling" and "it hurts the target task performance".**
Admittedly, CorDA in knowledge-preserved adaptation is not stronger than instruction-previewed adaptation when only the downstream task performance is evaluated. But please note that better maintaining pre-trained knowledge and pursuing better finetuning performance are inherently a tradeoff (also mentioned in [14] and [21]). A method may be better than another one in both worlds, as CorDA is v.s. LoRA in the results above, but for the method itself, the two worlds remain a tradeoff. An analogy: an advanced neural architecture (_e.g._ a Transformer) may be better than a traditional MLP/CNN architecture in terms of both accuracy and parameter efficiency, but within the better architecture itself, higher accuracy still requires more parameters (BERT_large is better than BERT_base with more parameters).
Therefore, when knowledge maintenance is not a concern, we introduce our instruction-previewed adaptation, which devotes all its capacity to the downstream task, surpassing the competitive methods DoRA and PiSSA on the three finetuning tasks (Math, Code, and instruction following), as shown in Table 2.
In conclusion, the knowledge-preserved adaptation and instruction-previewed adaptation highlight the comprehensive performance and the specialized downstream-task ability, respectively. We regard this as a feature of our PEFT method, allowing users to choose the mode based on their actual needs.
That is why we emphasize the two modes in parallel.
[1] Continual Pre-training of Language Models, Ke et al., ICLR 2023
> (3) Questions: Does the major difference between CorDA and LoRA lie in the initialization? Have you compared with other PEFT methods that focuses on initialization technique? Currently, the baseline comparison is not very comprehensive.
Yes, CorDA brings task context into the LoRA adapter initialization.
Adopting the same LoRA structure not only facilitates fair comparison, but also enables restoring the original LLM architecture after finetuning without architectural changes or extra inference burden.
Yes, we have compared with PiSSA, which also focuses on the LoRA adapter initialization but does not consider task context.
DoRA builds the adapter with a normalization and a learnable magnitude, and also does not consider task context.
It is noteworthy that both DoRA (ICML 24) and PiSSA (released on Arxiv in April 2024) are recent studies and are strong baselines. Besides, full parameter finetuning is the most direct reference because it usually has the best finetuning performance without considering parameter efficiency.
For downstream tasks, as shown in Table 2 and Table 3, our method achieves finetuning performances on par with full parameter finetuning, and better performances than the compared PEFT methods LoRA, DoRA, and PiSSA.
For comprehensive ability (with knowledge benchmarks included), as shown in Table 1, our method has the best average performance among full finetuning and the PEFT methods.
Therefore, our experimental results are already able to demonstrate the effectiveness of our proposed method.
---
Rebuttal Comment 2.1:
Comment: I thank the authors for the detailed response.
The first part of the response adequately addresses my Weakness 1. However, it's worth noticing that I am not saying every paper necessarily needs to have a theoretical part. Solid, inspiring empirical results are very important for AI/ML application. The reason why I raised Weakness 1 is the current writing of Section 3.2 gives people an impression that the paper states that the covariance matrix is tightly connected with task context/patterns, where the latter concept itself does not have a formal definition (it's really hard to articulate what task context is). I suggest weakening the argument here and discussing the empirical comparison more, as it's the empirical results that show decomposing $WC$ could lead to a better results.
Overall, the author response addresses my major concern about the method proposed in this paper, so I increase my score accordingly.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer kduk,
Thank you for increasing the score and your suggestions. We will rephrase the argument, and include more discussion about the empirical comparison in the revised paper.
Authors | Summary: The paper proposes a context-oriented decomposition adaptation method for large language models (LLMs) called CorDA. This method constructs learnable adapters by considering the context of downstream tasks or world knowledge. It can bridge the performance gap between parameter-efficient fine-tuning (PEFT) and full-parameter fine-tuning, while also mitigate the problem of catastrophic forgetting. Specifically, the method decomposes the weights of LLMs using singular value decomposition (SVD) guided by the covariance matrix of input activations. Through this approach, the model can identify important weights related to specific contexts, thereby creating context-aware adapters. These adapters can be customized for different tasks, enabling the model to retain general knowledge while enhancing performance on specific tasks.
Strengths: S1: The structure of the paper is relatively clear. The introduction progressively leads readers to understand the current state of fine-tuning LLMs and the challenges posed by knowledge forgetting after fine-tuning. This approach allows readers to quickly grasp the current situation of LLMs fine-tuning. In the subsequent method section, related modules are introduced based on these challenges.
S2: The main challenge addressed in this paper is the catastrophic forgetting of knowledge when fine-tuning LLMs. To tackle this, the authors propose a context-oriented decomposition method and introduce two modules. The first module generates an adapter that retains the model’s general knowledge by activating the covariance matrix using other question-answering datasets. The second module, designed to adapt to task-specific instructions, is called the instruction preview adaptation module, which generates adapters specific to the given tasks. The paper thoroughly explains the implementation process of the method, including the calculation of the covariance matrix and the application of SVD.
S3: The experimental validation in this paper is comprehensive, verifying the effectiveness of each module. The results of the modules are thoroughly analyzed through extensive experiments, demonstrating the effectiveness of the proposed modules.
Weaknesses: W1: In the experimental section, it is necessary to add a description of the experimental environment and provide parameters to offer readers more possibilities for replication and ensure authenticity. Additionally, the paper lacks specific quantification of catastrophic forgetting in the model.
Technical Quality: 3
Clarity: 2
Questions for Authors: none
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contributions of our work and your valuable comments.
> (1) It is necessary to add a description of the experimental environment and provide parameters to offer readers more possibilities for replication and ensure authenticity. Lacks specific quantification of catastrophic forgetting in the model.
We have provided the implementation details in our main paper and Appendix A. Concretely, we describe the data choice and sample number used to collect the covariance matrices, the rank choice, and the datasets used in the experiment section of our paper. In Appendix A, we provide the training settings (optimizer, batch size, learning rate, etc.), the GPU device we use, and the evaluation tools. Our code and models will be publicly available. Please let us know if the reviewer has any questions about our experimental details.
Usually, the average performance and the performance drop compared with the model before training/finetuning are the common metrics to quantify catastrophic forgetting. We have shown the average performance over world knowledge benchmarks and downstream tasks in Table 1.
It can reflect the comprehensive ability of the finetuned model in maintaining the pre-trained knowledge while learning the new task.
Besides, for the world knowledge benchmarks (Trivia QA, NQ open, and WebQS columns), the performance difference between the second row (LLaMA-2-7B) and the rows below shows the performance drops of each finetuning method compared with the original LLaMA-2-7B model. It is shown that our method enjoys the lowest performance drop in most cases. We will mark the performance drop beside these numbers to show the advantage more explicitly.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. After going through these responses, I decide to maintain my score.
Strengths: - **Originality**: The approach of modifying the LoRA method to maintain or enhance the performance of language models on specific data types appears to be a novel attempt. This method has not been explored in existing works dealing with parameter-efficient fine-tuning, thus offering originality.
- **Quality**: Experimental results demonstrate that CorDA is more effective than existing methods like LoRA or PiSSA in fine-tuning models such as Llama and RoBERTa.
- **Significance**: The idea of using data to derive effective low-rank matrices could inspire future research.
Weaknesses: - **Originality**: There are methodological similarities with PiSSA, and comparisons with existing works like AdaLoRA [1] are lacking.
- **Quality**: The exact role of the covariance matrix $C$ is not clearly explained. The paper states in lines 145-147 that “the covariance matrix of each layer’s activation will exhibit different outlier patterns as they are responsive to the task triggered to highlight **different aspects** of the pre-trained weight.” Providing visualizations to show how data from different tasks trigger different parts of the weight would enhance clarity.
- **Clarity**: The amount and type of data used to derive the covariance matrix $C$ significantly impact performance, but the paper lacks a clear discussion and analysis on this. Moreover, there is no guidance on how to choose $r$ in Equations 4 and 5, which appears to be a crucial hyperparameter. This omission makes it challenging to practically apply the proposed method based solely on the information provided in the paper.
- **Significance**: Although the research seems advantageous compared to existing parameter-efficient fine-tuning methods, it is unclear how much and what quality of data is needed to derive $C$ for better performance. The usability of the CorDA heavily depends on the amount and quality of data required. Despite this, the concept of using data to derive better low-rank matrices is novel.
Reference
[1] AdaLoRA: Adaptive Budget Allocation For Parameter-efficient Fine-tuning
Technical Quality: 3
Clarity: 2
Questions for Authors: - It would be beneficial to include visualizations of the covariance matrix derived from different data.
- The paper should incorporate a data selection strategy for $C$ and an extensive analysis on $r$.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations in Section 4.4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contributions of our work and your valuable comments.
> (1) Originality: methodological similarities with PiSSA, and comparisons with existing works like AdaLoRA
_Similarity with PiSSA:_
Both our method and PiSSA adopt SVD for the pre-trained weights; however, our method has fundamental differences and advantages compared with PiSSA, as follows:
- Activation awareness has been proven to be important in LLM compression and quantization, but existing PEFT studies rarely consider task context. PiSSA provides a better adapter initialization than LoRA using SVD, but still does not consider any task context. **Although our method also adopts SVD, we never claim that the novelty of our method lies in the usage of SVD to build LoRA adapters. The fundamental originality is that we introduce task context, captured by the covariance matrix from activations, to orient the decomposition of weights, such that the resulting adapter initialization is task-dependent**.
- Existing PEFT methods including PiSSA rarely support the option of finetuning with pre-trained knowledge better preserved. Since our method is task-dependent, we offer such flexibility enabling both knowledge-preserved adaptation and instruction-previewed adaptation, customized for the actual need.
- Even when the maintenance of pre-trained knowledge is not a concern, our method in instruction-previewed adaptation surpasses DoRA and PiSSA in the three downstream tasks of Math, Coding, and instruction following, as shown in Table 2 of our paper. It is noteworthy that both DoRA (ICML24) and PiSSA (arXiv:2404.02948, online in April 2024, within 2 months of our submission) are the latest studies, and PiSSA actually could be considered as contemporaneous.
Therefore, we believe that **our originality and contributions, including the methodology of introducing task context into the LoRA adapter initialization, and the experimental performance of both maintaining pre-trained world knowledge and improving downstream task ability, should NOT be dismissed just because we also adopt SVD in the adapter initialization process**.
_Comparison with AdaLoRA:_
Our method improves the LoRA adapter initialization by introducing task context, and adopts the low intrinsic dimension $r$ following the standard setting in LoRA.
AdaLoRA aims to dynamically adjust $r$ during finetuning. So, CorDA and AdaLoRA are methods focusing on different aspects (how to better build adapters and how to dynamically adjust the rank).
In our experiments, we mainly compare CorDA with methods of the same stream, PiSSA and DoRA. PiSSA also focuses on the LoRA adapter initialization using SVD, but does not consider task context. DoRA builds the adapter with a normalization and a learnable magnitude, and also does not consider task context. It is noteworthy that both DoRA (ICML 24) and PiSSA (released on Arxiv in April 2024) are recent studies and are strong baselines. Besides, full parameter finetuning is the most direct reference because it usually has the best finetuning performance without considering parameter efficiency. For downstream tasks, as shown in Table 2 and Table 3, our method achieves finetuning performances on par with full parameter finetuning, and better performances than the compared PEFT methods LoRA, DoRA, and PiSSA. For comprehensive ability (with knowledge benchmarks included), as shown in Table 1, our method has the best average performance among full parameter finetuning and the PEFT methods. Therefore, our experimental results are already able to demonstrate the effectiveness of our proposed method.
> (2) Quality: The exact role of the covariance matrix $C$ is not clearly explained. The paper states that “the covariance matrix of each layer’s activation will exhibit different outlier patterns as they are responsive to the task triggered to highlight different aspects of the pre-trained weight.” Providing visualizations to show how data from different tasks trigger different parts of the weight would enhance clarity.
When the inputs of different tasks are fed into an LLM, the covariance matrices from activations exhibit different patterns. We use such patterns to orient the decomposition of the LLM pre-trained weights, making the resulting adapter initialization task-dependent.
We did not provide the visualization of the covariance matrices in the paper because their dimension (4096 or 11008) is too large to be informative. In the rebuttal, we downsample the covariance matrices to 32 $\times$ 32 and visualize their heatmaps.
Please refer to our response in the **global rebuttal and its PDF attached**.
It is shown that the heatmaps from NQopen and TriviaQA share some similar patterns (marked in red circles), which do not appear in the one from the different task MetaMath.
We hope this result can further justify that the covariance matrix patterns can be used to characterize the triggered task.
Besides, we can find some support from the literature. OWQ [27] proposes outlier-aware weight quantization based on the covariance matrix. ASVD [65] performs SVD considering activations for compression. But as shown in Figure 2 and Table 6, our CO-SVD, which is based on the covariance matrix instead of the activation itself, is much better at capturing the task context.
We will add these visualization results and more discussion about the literature support in the revised version of our paper.
> (3) Clarity and Significance: "The amount and type of data used to derive the covariance matrix $C$ significantly impact performance, but the paper lacks a clear discussion and analysis on this. Moreover, there is no guidance on how to choose $r$ in Equations 4 and 5, which appears to be a crucial hyperparameter."
**We address your concern about the impact of data amount and type to derive $C$ on performance and how to choose $r$ in the global rebuttal. Please refer to the global rebuttal-> common issue.**
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications. The response effectively addresses my concerns, clearly highlighting the key contributions of this work in comparison to PiSSA, as well as the different patterns of covariance matrices across tasks. Therefore, I have increased the score accordingly.
Strengths: The strength of the paper lies in its simplistic exposition of its motivation, proposed framework, and the extensive experimental study to show the efficacy of the initialization framework. The authors begin with a clean exposition to SVD, and how one can think of the different eigenvectors in the input weight covariance matrix. With a very intuitive discussion, the authors show the directions along which the weight parameters aren't perturbed during training to maintain world knowledge. The directions are selected by looking at the covariance matrix of knowledge-test datasets like TriviaQA, and NaturalQA. Furthermore, the authors indicate the directions along which they want to pre-capture features of the fine-tuning task at hand, which can give bigger returns compared to training LoRA parameters from scratch. Overall, the principle of data-dependent initialization is novel and can guide the community towards more systematic training designs.
Weaknesses: Overall, I don't see much weakness with the work. I have a few questions regarding the experimental framework, which I would like to discuss during the rebuttal period.
- How did the authors decide to maintain the last $r$ eigenvectors for world-knowledge? I believe, the authors should cite relevant works or make proper justification on this design choice.
- How scalable is their proposed approach to other datasets where we may want to maintain knowledge? That is, how likely is the model to maintain knowledge on TriviaQA, if it wasn't included during LoRA initialization?
- What is the necessary sample complexity for initializing the LoRA parameters, i.e. how many samples from the knowledge datasets are necessary to get a good estimate of the directions to freeze?
- Is the knowledge preservation necessary at each weight parameters and each layer of the model? How do results change when the model's LoRA parameters are restricted only in the lower layers, while the higher layers are given more freedom during fine-tuning?
Technical Quality: 4
Clarity: 4
Questions for Authors: Please check my questions above.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors discuss the limitations of their work in section 4.4, and clearly indicate the future directions that the community can pursue starting from their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contributions of our work and your valuable comments.
> (1) How did the authors decide to maintain the last $r$ eigenvectors for world-knowledge? Cite relevant works or make proper justification on this design choice.
As shown in Figure 1 and Eq. (4), in the knowledge preserved adaptation, we use the last $r$ eigenvectors to build adapters, instead of maintaining them. We explain the design choice in more details as follows.
Our context-oriented SVD (CO-SVD) decomposes the pre-trained weights into orthogonal abilities, where the largest principal components correspond most to the task context captured by the covariance matrix. In instruction-previewed adaptation (IPA), we want to better learn the target ability, so we use the first $r$ eigenvectors to build adapters. In knowledge-preserved adaptation (KPA), the first components correspond to the QA ability, which is what we want to maintain. So, we use the last $r$ eigenvectors to build adapters while freezing the first ones.
Therefore, the design motivations of IPA and KPA are not opposed. In both modes, the principal components after CO-SVD represent the ability indicated by the covariance matrix. The difference results from the purpose, _i.e._ better adapting these components (used as adapters in IPA) or better maintaining them (frozen in KPA).
We thank the reviewer for the suggestion of citing relevant works to justify this design choice. Actually, ASVD [65] is based on a similar design motivation: it discards the last eigenvectors and maintains only the largest several components for model compression. In our KPA, we also maintain the principal components to preserve world knowledge, but adapt the last $r$ eigenvectors to learn the new ability instead. Despite the similar motivation, we have shown that our CO-SVD is much better at capturing the characteristics of the target task than ASVD in Figure 2 and Table 6. Previous studies [A, B, C] also adopt SVD for model compression. We will cite these works and explain the design choices of KPA and IPA more thoroughly in the revised paper.
[A] Language model compression with weighted low-rank factorization, ICLR'22.
[B] Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, NeurIPS'14.
[C] GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking, NeurIPS'18.
> (2) How scalable is their proposed approach to other datasets where we may want to maintain knowledge? How likely is the model to maintain knowledge on TriviaQA, if it wasn't included during LoRA initialization?
Our method guides the decomposition of pretrained weights by the data-dependent covariance matrices. However, the knowledge to preserve is definitely **NOT** constrained to the data used for adapter initialization. Actually, we only sample 256 questions from a QA dataset to collect the covariance matrices. The ability of our method to preserve world knowledge cannot be derived from these limited samples themselves. That is to say, what we maintain is the QA ability, instead of some specific knowledge from the collected data.
We use the covariance matrices, whose outlier patterns characterize the target task, to make the decomposition task-specific. Therefore, the same kinds of input/query (e.g. questions from TriviaQA and NQopen) have a similar effect because they both trigger a similar ability.
We have conducted analysis in Table 4 (TriviaQA and NQopen) and Table 5 (WizardLM-Evol-Instruct and Alpaca) comparing results with data from similar tasks.
Please refer to our response in **the global rebuttal -> common issue -> (2) Type/quality.**
These results indicate that randomly collecting context from one dataset has a scalable effect to other datasets of the same task.
> (3) What is the necessary sample complexity for initializing the LoRA parameters, i.e. how many samples from the knowledge datasets are necessary to get a good estimate of the directions to freeze?
As described in Line 233 of our paper, we sample 256 questions for our experiments of knowledge preserved adaptation.
We collect 256 samples in all our experiments for both KPA and IPA modes.
In Table 6, we analyze the effect of sample number. We compare the results of collecting 32 and 256 samples on Wikitext-2 and PTB, respectively. In order to also investigate the effect of sample number in the KPA experiment, we collect fewer samples (128 and 32) from NQopen. Please refer to our response and results in **the global rebuttal -> common issue -> (1) Amount.**
> (4) Is knowledge preservation necessary at each weight parameters and each layer? How do results change when the model's LoRA parameters are restricted only in the lower layers, while the higher layers are given more freedom?
We follow the standard setting in LoRA, DoRA, and PiSSA, _i.e._, evenly using the same low rank for all linear layers. It also facilitates fair comparison with them.
Since a large deviation in lower-layer weights passes through more subsequent layers, and its accumulated effect can cause a large shift in the final representation, restricting the adaptation of lower layers while finetuning higher layers with more freedom may indeed be preferable for maintaining pre-trained knowledge.
Thus we can use a small intrinsic rank ($r$) for lower layers to constrain their adaptation, and adopt a large $r$ and even full finetuning for higher layers to learn downstream tasks.
We can also adopt an adaptive strategy, _e.g._ based on the eigenvalue distribution after our CO-SVD, to assign ranks.
These strategies may be more parameter-efficient than using the same rank for all layers.
Besides, different Transformer weights and MLP blocks may also play different roles in maintaining pre-trained knowledge. Investigating the importance of different weight parameters in preserving knowledge and assigning adapters accordingly will be a valuable extension of CorDA and deserves future exploration.
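For instance, one such adaptive strategy (a hypothetical sketch, not part of our current method) could pick each layer's rank from the cumulative energy of its singular values after CO-SVD:

```python
import numpy as np

def rank_from_energy(S, keep=0.995):
    """Smallest rank whose leading singular values retain `keep` of the total squared energy."""
    energy = np.cumsum(S ** 2) / np.sum(S ** 2)
    return int(np.searchsorted(energy, keep)) + 1

# a layer with a fast-decaying spectrum needs only a small rank
S = np.array([10.0, 5.0, 1.0, 0.5, 0.1])
assert rank_from_energy(S) == 3
```

Layers whose spectra decay slowly would then receive larger ranks, concentrating the parameter budget where the pre-trained ability is harder to compress.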
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. After going through the response, I am maintaining my score. | Rebuttal 1:
Rebuttal: We thank AC and all reviewers for reviewing our submission and recognizing the contributions of our work. We are grateful for the valuable comments and suggestions.
### 1. Response to each review
For each review, we address the major question/concern in the rebuttal. We leave discussions and answers to minor questions in a comment appended.
### 2. Covariance matrix visualization
As suggested by Reviewer Mgdn, we provide the visualization results of the covariance matrices, collected from the three tasks MetaMath, NQ open, and Trivia QA, in **the PDF attached.** Please zoom in for a better view.
Since the original dimension in 4096 or 11008 will be too large to be informative, we downsample the covariance matrices into 32 $\times$ 32 and visualize their heatmaps. We provide the results from the activations before all the weights including ``self_attn.k_proj`` (the same as ``q_proj`` and ``v_proj`` due to the same input), ``self_attn.o_proj``, ``mlp.down_proj``, and ``mlp.gate_proj`` (the same as ``mlp.up_proj``) in the first layer, and the ``self_attn.o_proj`` weight in later layers.
We use red circles to mark the similar patterns, which the heatmaps from NQopen and TriviaQA share but do not appear in the one from the different task MetaMath.
The visualization result empirically supports that the covariance matrix patterns can be used to characterize the triggered task.
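For reference, the downsampling can be implemented as simple block pooling over the covariance matrix (a sketch; pooling the mean magnitude per block is an illustrative choice, made to keep outlier patterns visible):

```python
import numpy as np

def downsample_cov(C, out=32):
    """Block-pool a (d, d) covariance matrix into an (out, out) grid for a heatmap."""
    d = C.shape[0]
    edges = np.linspace(0, d, out + 1, dtype=int)   # block boundaries along each axis
    M = np.empty((out, out))
    for i in range(out):
        for j in range(out):
            # mean absolute value within each block
            M[i, j] = np.abs(C[edges[i]:edges[i+1], edges[j]:edges[j+1]]).mean()
    return M

# toy example: a 1024-dim covariance reduced to a 32 x 32 heatmap
X = np.random.default_rng(0).standard_normal((1024, 64))
C = X @ X.T / 64
heat = downsample_cov(C)
assert heat.shape == (32, 32)
```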
### 3. Common issue by Reviewer Mgdn and Reviewer 527n:
Reviewer Mgdn concerns about the impact of data amount and type to derive $C$ and how to choose $r$. Reviewer 527n has questions about scalability to other datasets and sample complexity.
- _Amount and type of data_:
**We respectfully disagree with the comment that "the amount and type/quality of data used to derive the covariance matrix $C$ significantly impact the performance/usability of CorDA". Actually, we have conducted an analysis in our paper about the samples used to collect the covariance matrices.**
(1) Amount
In Table 6, we analyze the impact of sample number, comparing the results of collecting 32 and 256 samples on Wikitext-2 and PTB, respectively. On both Wikitext-2 and PTB, we observe a slight advantage of using 256 samples only when discarding the smallest 1024 ranks (6.35 v.s. 6.62 for Wikitext-2 and 22.28 v.s. 22.68 for PTB). When discarding the smallest 0-512 ranks, the results of using 32 and 256 samples are very similar and both much better than plain SVD and ASVD. Therefore, only a few samples are enough for the covariance matrix to capture the task context. In order to also investigate the effect of sample number in the knowledge-preserved adaptation (KPA), we collect fewer samples (128 and 32) from NQopen and compare the results as follows.
|Samples|Trivia QA|NQ open|WebQS|GSM8k|Math|Avg|
|---|---|---|---|---|---|---|
| 256|44.30 |9.36 |7.14 |44.58 |6.92 |22.46|
| 128|44.53 |9.15 |7.16 |44.79 |6.85 |22.50|
| 32|44.11 |9.30 |6.94 |44.70 |6.93 |22.39|
Similarly, the choice of sample number within [32, 256] does not cause a large performance deviation. **Collecting covariance matrices from only a few samples is enough to implement our method.**
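The covariance collection can be sketched as an accumulation of outer products over a handful of calibration inputs (a hypothetical minimal version; the forward-hook mechanics of an actual implementation are omitted, and all dimensions are illustrative):

```python
import numpy as np

def collect_covariance(samples, dim):
    """Accumulate C = (1/N) * sum_i x_i x_i^T over calibration activations."""
    cov = np.zeros((dim, dim))
    n = 0
    for x in samples:                    # each x: (seq_len, dim) activations
        cov += x.T @ x
        n += x.shape[0]
    return cov / n

rng = np.random.default_rng(0)
dim = 64
few  = [rng.standard_normal((128, dim)) for _ in range(32)]
many = [rng.standard_normal((128, dim)) for _ in range(256)]
c32, c256 = collect_covariance(few, dim), collect_covariance(many, dim)
# With i.i.d. inputs, both estimates are already close to each other,
# illustrating why a small calibration set can suffice.
rel_diff = np.linalg.norm(c32 - c256) / np.linalg.norm(c256)
```

This only illustrates the statistical point that the estimate stabilizes quickly; the actual samples in the paper are of course correlated task inputs, not i.i.d. noise.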
(2) Type/quality
The inputs/queries from a close task (e.g., questions from TriviaQA and NQopen) have a similar effect because they both trigger a similar ability. We have conducted an analysis of this in our experiments. For the KPA mode, in Table 4 of our paper, the last two rows indicate that collecting data from TriviaQA and from NQopen yields similar performance on the QA benchmarks and the finetuning task. Both are much better than Plain SVD, which does not consider any task context. For the IPA mode, as shown in Table 5, we collect covariance matrices from WizardLM-Evol-Instruct and Alpaca to build adapters, respectively, and finetune them on instruction following. They also lead to similar results (5.15 and 5.06) on MTBench, both better than the full finetuning, LoRA, and PiSSA results listed in Table 2. These results indicate that **context randomly collected from one dataset transfers to other datasets of the same task.**
Moreover, in Table 4, we show the standard deviation of the results run with different seeds. It is shown that **randomly sampling data does not cause a large performance deviation**, which implies that it is not necessary to specially check the data quality.
Introducing a data selection strategy when collecting the covariance matrix may indeed bring further improvement, but we believe this is an interesting extension of CorDA that deserves future exploration.
**Therefore, the effectiveness of our method is NOT sensitive to the data selection and does NOT rely on a large amount of data or specific type/quality.**
- _How to choose $r$:_
**Please note that $r$ is NOT a hyper-parameter introduced by our method.** It is simply the low intrinsic dimension of the LoRA adapter. A higher $r$ yields better finetuning performance with more trainable parameters; a lower $r$ is more parameter-efficient but performs worse.
The goal of our study is to introduce task context into the process of adapter initialization, and accordingly enable parameter-efficient finetuning with better world knowledge maintenance or stronger finetuning performance.
Therefore, **we follow the standard setting in LoRA, DoRA, and PiSSA**, _e.g._, usually setting $r=128$ for all linear layers.
This also facilitates fair comparison with these methods. In Figure 3 of our paper, we compare CorDA with full finetuning, LoRA, and PiSSA at both $r=128$ and $r=32$, to show the effectiveness of our method in different scenarios.
Therefore, we do not need to provide any guidance on how to choose $r$. Just as in LoRA, DoRA, and PiSSA, $r$ is a configuration that controls the tradeoff between efficiency and performance, determined by the user's preference.
Pdf: /pdf/f666d7b0f2ff7e672e7493c9f26ba056a03b01eb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Decision Sparsity | Accept (poster) | Summary: The paper extends the notion of decision sparsity known as the sparse explanation value (SEV). Cluster-based and tree-based SEV are introduced, and algorithms to optimise decision sparsity are considered. The core of the paper -- SEV -- is defined as the number of factors that need to be changed to reference values in order to change the decision.
Strengths: The problem considered is important for real (e.g., medical, criminology applications). The example (Table 2) is very helpful to understand the problem. The approach is mathematically sound and in general well-described.
Weaknesses: The method is promising but its current version is hardly scalable.
Technical Quality: 4
Clarity: 3
Questions for Authors: Line 85: "humans have no intuition for why a point belongs to one class or the other". I cannot completely agree with this statement, and it would be helpful to provide some examples. For me, on the contrary, medical doctors often have an intuition about how to classify a patient; however, they do not always know how to explain their intuition.
How is the sparse explanation for the sample x (mentioned on line 171) formally defined? Is the number of features of x (original) and x (sparse) different?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The method is not scalable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your review! We really appreciate it! See below for our response to your questions and concerns.
> Line 85: "humans have no intuition for why a point belongs to one class or the other". I cannot completely agree with the statement, and it would be helpful to provide some examples. For me, on the contrary, medical doctors often have an intuition how to classify a patient, however, they do not always know how to explain their intuition.
There’s a good example in Figure 3 of the paper of Keane [1], which shows images on either side of the decision boundary that look visibly identical (we have provided the figure in the attached pdf file). There’s simply no intuition for why the point belongs to one class or another. We agree with you that for some decisions, particularly where there are discrete variables, the decision boundary could be clearer. We’ll adjust the wording of that sentence. However, even for medical decisions by doctors, it’s also not always clear how to diagnose someone, which is why AI can be helpful. (We have multiple projects on exactly this topic.)
[1] Delaney, Eoin, et al. "Counterfactual explanations for misclassified images: How human and machine explanations differ." Artificial Intelligence 324 (2023): 103995.
> How the sparse explanation for the sample x (mentioned on line 171) is formally defined? Is the number of features of x (original) and x (sparse) different?
The number of features of the original query x and the sparse explanation x are the same. The sparse explanation for a specific query is defined as a p-dimensional vector that has changed the smallest number of features from its query value to the reference value to flip the prediction, while the other features remain unchanged. Four examples are shown in Table 1. The explanations (lower rows) are the same size as the Query (top row). The gray colored values are the unchanged query values, while the black ones are changed. This is the same type of definition as other counterfactual explanation methods.
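The definition above can be made concrete with a brute-force sketch (the toy model, query, and reference below are hypothetical; practical SEV implementations avoid this exhaustive subset search):

```python
from itertools import combinations
import numpy as np

def sev(model, query, reference):
    """Smallest number of features to move from the query to the reference
    values so that the model's prediction flips (brute force over subsets)."""
    base = model(query)
    p = len(query)
    for k in range(1, p + 1):
        for idx in combinations(range(p), k):
            cand = query.copy()
            cand[list(idx)] = reference[list(idx)]   # other features unchanged
            if model(cand) != base:
                return k, cand        # k is the SEV, cand the sparse explanation
    return p, reference

# Toy linear classifier: positive iff x0 + x1 > 1.
model = lambda x: int(x[0] + x[1] > 1)
query = np.array([1.0, 1.0, 5.0])     # predicted positive
reference = np.zeros(3)               # a negative reference point
k, expl = sev(model, query, reference)
print(k)                              # → 1 (zeroing either x0 or x1 flips it)
```

Note that `expl` has the same length as `query`, matching the answer above: the explanation is a full p-dimensional vector in which only the changed entries carry the explanation.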
> The current version is hardly scalable.
SEV measures decision sparsity, so it is calculated for each point separately, therefore in practice, when we provide an SEV for one loan applicant at a time, the calculation is very fast for that applicant (fraction of a second to a few seconds, more detailed calculation time is shown in Table 7-9 in Appendix H). If you want to calculate SEV for all points at once for a huge dataset, like in the experiments of our paper, it can be computed in parallel for each point.
If you want to compute it for each point and for a large number of features, we could apply the gradient-based method from our paper first in order to reduce the mean SEV of the model to be close to 1 without sacrificing the model performance. This could speed up SEV calculation for all points.
Thank you so much once again for your review!
---
Rebuttal Comment 1.1:
Comment: I acknowledge the rebuttal. Thank you for the detailed answer. | Summary: The authors build on top of the Sparse Explanation Value approach by
Sun et al and provide improvements in terms of closeness and
credibility.
Strengths: Sensible problem, well presented solution.
Weaknesses: The main limitation I can see in the work is its very incremental nature with respect to the approach by Sun et al.: going from one negative reference point to a set obtained by clustering negatives (cluster-based SEV) is a trivial extension, while tree-based SEV only works if the underlying model is (or can be approximated as) a decision tree, substantially restricting the applicability of the approach.
Additionally, the superiority of sparsity with respect to distance in
terms of acceptability for humans is intuitive but not always
guaranteed. When dealing with recourse, for instance, which is the
setting used in the experiments, the main problem is the cost of the
change, and slightly modifying two features could be less
expensive than modifying a single one by a larger value. This should
be better discussed in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: How can you turn a DT leaf into a reference point when dealing with
continuous features? what is the actual value of the continuous
variables in the leaf?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The approach is very incremental, and the main extension substantially
restricts the applicability of the method.
The rebuttal of the authors did shed additional light on the novelty of the contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your review! We really appreciate it! See below for our response to your questions and concerns.
> The work is of a very incremental nature with respect to the approach by Sun et al.: cluster-based SEV is a trivial extension, while tree-based SEV only works if the underlying model is a decision tree, substantially restricting the applicability of the approach.
In terms of your question about the contribution of our paper, we understand that some of the ideas seem trivial (going from one negative reference to a set), and that you think the extension to decision trees is not broadly useful because it applies only to trees. Indeed, the idea of going from one reference to many is easy. Working out how to do it, and adding credibility to the computation, is not. As for trees, there are many papers at NeurIPS each year that are *only* focused on decision trees! Trees have been among the most popular algorithms for interpretable machine learning since the 1960s. The fact that we have a substantially better way to define and compute SEV for trees is important for a lot of applications! Previous studies have also singled out tree-based models for separate discussion; for instance, many papers discuss fast SHAP value computation in tree-based models, also known as TreeSHAP.
While it’s tempting to believe ideas are incremental in retrospect, they are often not obvious until after seeing them. Many of our ideas are not obvious at all, for instance looping through all good sparse decision trees to find one that optimizes SEV + accuracy, which makes what would have been a practically impossible computation now take seconds. Many important ideas in ML seem obvious in retrospect, e.g., placing skip connections in neural networks. If you glance through the orals in NeurIPS from last year, they are almost all variations on existing topics. Our paper is only the second paper on SEV, which is a new notion of sparsity, and it generalizes the definition and works out how to optimize it and make its computations credible.
We have actually made a lot of improvements and generalizations with respect to the original SEV paper after generalizing the framework of SEV to make it support instance-wise reference selection.
- We proposed cluster-based SEV and its variants in order to handle the two objectives of instance-wise selection: the flexible reference solves the issue of higher SEVs (cluster SEVs are larger than standard SEVs), and the credibility approach enables the explanations to be located in a high density region. The credibility setup is definitely not obvious.
- For tree-based models, we introduced SEV-T which directly uses negative leaf nodes as the reference points, and introduced a fast calculation procedure for SEV-. That fast procedure is definitely not obvious and is comparable to content within an algorithms course. The bound on credibility in SEV- in Theorem 4.3 isn’t obvious either - tree-based SEV- computations are always sufficiently credible.
- We generalized the original gradient-based methods and introduced new search-based methods specifically for tree-based models in order to optimize the specific model type to have low SEV. The methods for optimizing SEV are not obvious, particularly Sec 5.2 where we optimize SEV by scanning all good trees using the new Rashomon set algorithms and finding the one with the best SEV. (In contrast, think about how many papers optimize some combination of $\ell_1$ loss with accuracy, or optimize neural networks using some variation of gradient descent!)
> The superiority of sparsity with respect to distance in terms of acceptability for humans is intuitive but not always guaranteed. When dealing with recourse, for instance, slightly modifying two features could be less expensive than modifying a single one by a larger value.
Remember that SEV is not recourse; it is a measure of sparsity only. That said, it’s possible to change the metric to a recourse metric if desired. We did use multiple distance metrics. In our experiments we also used the $L_\infty$ metric to evaluate the cost of change, which would handle the case you mentioned - when we care about how much we change the features. Table 6 in Appendix G provides the mean $L_\infty$ for each explanation. Based on Table 6, SEV-C achieves better sparsity, relatively low $L_\infty$ change (compared to DiCE at the same sparsity level), and meaningfulness (high median log likelihood) *simultaneously,* while the other methods except DiCE use almost all features in their explanations, which is not only too complex to interpret, but also not actionable in practice, and would yield worse recourse scores. This is not a case of modifying two features instead of one: it is modifying all instead of one. So we would already be better in terms of most recourse loss functions. It is possible to specialize the computation to a particular recourse loss, but that’s not our goal here; we’re concerned with sparsity of the explanation, not other costs. We will add a note for future work by others.
> How can you turn a DT leaf into a reference point when dealing with continuous features? What is the actual value of the continuous variables in the leaf?
Interestingly, the reference can be any point x within the leaf. So if the leaf is defined by $x_1>5$ and $x_3>0$, then any point with those conditions is a viable reference. For a query, the algorithm needs to flip its feature values to satisfy those of the leaf conditions to make an opposite prediction.
Since you can choose any point in the leaf as a reference value, you could choose the median/mean values of points in the leaf. That choice won’t influence the fast calculation of SEV-T, so the user can choose any actual value inside the leaf that they believe is most meaningful as the leaf’s reference.
Thank you so much once again for your review!
---
Rebuttal Comment 1.1:
Title: Thanks for your answer
Comment: Thanks for your detailed feedback, I appreciate the clarifications and your arguments in favour of the novelty of the contribution, I encourage you to better clarify these aspects in the manuscript. This said, I am happy to raise my score to a borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for the insightful comments and willingness to raise your score! We will add those clarifications to the revised manuscript.
---
Rebuttal 2:
Comment: Dear reviewer brvf,
As the discussion period is approaching its conclusion in two days, we would like to kindly remind you that we have addressed your comments in our rebuttal. We would greatly appreciate any additional feedback you may have before the deadline. If you have any further questions or concerns, please do not hesitate to reach out, and we will do our utmost to respond promptly.
Thank you for your time and consideration.
Best regards,
The Authors of Submission 11304 | Summary: This paper proposes several ways to create closer, sparser, and more credible explanations for the SEV, along with two optimizing models. The results of the experiments on various datasets support the paper's claims.
Strengths: 1. Before reading this paper, I was unfamiliar with the sparse decision field. However, this paper is well-written and enjoyable to read.
2. I think this paper is quite creative by simultaneously considering closeness, sparsity, and credibility.
3. The comprehensive experiments address most of the claims they proposed in the introduction section.
Weaknesses: It could be better to provide the complexity analysis and the analysis of time expenditure for different variants of SEV. For example, the computational benefits of using tree-based SEV.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. In Table 1, what do the gray numbers represent?
2. In line 199, there are duplicate "and"s.
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: I'm concerned about the depth of the tree if the model applies to large-scale data. In section A, the number of observations is up to 100K. If we encounter a much larger data set, the complexity of the model would exponentially increase with the depth of the tree. That's why I'm interested in time analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your review! We really appreciate it! See below for our response to your questions.
> In Table 1, what do the gray numbers represent?
Thank you for pointing out the missing explanation for the gray numbers. The gray numbers in Table 1 represent the query feature values that have not been changed, while the black numbers are the values that were changed in order to flip the prediction. You can observe that the gray feature values are the same as the query feature values. In SEV, we only use the changed features as the explanation, which is the same approach as DiCE.
> In line 199, there are duplicate "and"s.
Thanks for pointing out the typo! We have already corrected the typo in our paper.
> It could be better to provide the complexity analysis and the analysis of time expenditure for different variants of SEV. For example, the computational benefits of using tree-based SEV. I'm concerned about the depth of the tree if the model applies to large-scale data. In section A, the number of observations is up to 100K. If we encounter a much larger data set, the complexity of the model would exponentially increase with the depth of the tree. That's why I'm interested in time analysis.
Thanks for your question about the time complexity of tree-based SEV. The time complexity of the tree-based SEV calculation depends on the structure of the tree rather than simply its depth. The worst case for a tree-based SEV calculation is $O(n)$, where $n$ is the number of nodes in the decision tree; the worst case does not scale with the number of data points. Moreover, the $\text{SEV}^T$ search stops early when the query has a sibling leaf node with a negative prediction (shown in Theorem 4.1), or when the query already finds an explanation with $\text{SEV}^T$ = 1 during the search (see Lines 28-29 in Algorithm 3). Therefore, the calculation of $\text{SEV}^T$ remains fast as the number of observations grows.
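The Theorem 4.1 early stop mentioned above can be sketched as follows (a schematic with a hypothetical dict-based tree representation, not the actual $\text{SEV}^T$ algorithm):

```python
def sev_t_is_one(tree, query):
    """Theorem 4.1 shortcut, schematically: if the query's leaf has a sibling
    leaf with the opposite prediction, SEV^T = 1 and the search stops early.
    Internal nodes: {'feature', 'thresh', 'left', 'right'}; leaves: {'pred'}."""
    node, parent = tree, None
    while 'pred' not in node:                     # descend to the query's leaf
        parent = node
        go_left = query[node['feature']] <= node['thresh']
        node = node['left'] if go_left else node['right']
    sibling = parent['left'] if parent['right'] is node else parent['right']
    return 'pred' in sibling and sibling['pred'] != node['pred']

# Toy stump: split on feature 0 at 0.5; leaves predict 0 (left) and 1 (right).
tree = {'feature': 0, 'thresh': 0.5,
        'left': {'pred': 0}, 'right': {'pred': 1}}
print(sev_t_is_one(tree, [1.0]))                  # → True
```

A single traversal from root to leaf plus one sibling check is what keeps this path independent of the number of training observations.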
Thank you so much once again for your review!
---
Rebuttal Comment 1.1:
Comment: Thanks for your answers! | null | null | Rebuttal 1:
Rebuttal: Thank you to all reviewers. The following pdf goes with the response for Reviewer xH4u.
Pdf: /pdf/5285496e6b221504bb4ad4cd8b6d9db68f5983bc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A distributional simplicity bias in the learning dynamics of transformers | Accept (poster) | Summary: The paper investigates whether transformers trained on natural language data using masked language modelling (MLM) exhibit a "simplicity bias" - learning increasingly complex interactions between input tokens over the course of training. The work provides both theoretical and empirical evidence for a simplicity bias in transformer models trained on text data.
The key contributions of the paper are listed below:
1) Introduction of a novel framework to study the learning dynamics of transformers:
- They develop a method to create "clones" of natural language datasets that capture token interactions up to a specified order.
- This is done by using transformers with factored attention layers and quadratic $x^2$ activation functions.
2) Demonstration of sequential learning in transformers: By using their cloning method on the TinyStories dataset, the authors show that a BERT-like transformer sequentially learns increasingly higher-order interactions between tokens. More specifically, lower-order interactions (e.g. 3-body) are learned earlier, while higher-order interactions (e.g. 7-body) continue to be learned later in the training.
3) Theoretical analysis: the authors provide analytical insights into the learning dynamics of multi-layer factored attention models, showing how different layers contribute to learning interactions of increasing complexity.
4) Evidence on language data: the authors demonstrate that interactions between words in natural text (specifically TinyStories) are highly many-body, up to at least 7th degree.
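A minimal sketch of the factored-attention building block from contribution (1), as I understand it (the token-mixing matrix is learned and input-independent, followed by the quadratic activation; all dimensions and initializations below are illustrative assumptions, not the authors' exact architecture):

```python
import numpy as np

class FactoredAttentionLayer:
    """Attention with a learned, input-independent mixing matrix A over
    positions, followed by a quadratic activation, as a plain NumPy sketch."""
    def __init__(self, seq_len, dim, rng):
        self.A = rng.standard_normal((seq_len, seq_len)) / np.sqrt(seq_len)
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def __call__(self, x):                 # x: (seq_len, dim) token embeddings
        # The x**2 activation is what raises the order of token interactions
        # each time a layer is stacked.
        return (self.A @ x @ self.W) ** 2

rng = np.random.default_rng(0)
layer = FactoredAttentionLayer(seq_len=16, dim=8, rng=rng)
x = rng.standard_normal((16, 8))
out = layer(x)
print(out.shape)                           # → (16, 8)
```

Stacking such layers, as the paper does, bounds the maximum interaction order of the resulting "clone" distribution by construction.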
Strengths: Overall, I enjoyed reading the paper very much, and I also love the hypothesis investigated in this work. The specific aspects of this work that I love are listed below.
1) *The proposed hypothesis is of great significance*:
Beyond the significance of this simplicity bias mentioned in the paper, I want to provide the perspectives from the cognitive science community. A series of language evolution studies, e.g. [1], have shown that simplicity bias plays a key role in shaping the evolution of natural languages. For syntax (i.e. generative functions of language data), humans also have such simplicity bias. It would be highly interesting and exciting (at least for me) to see that Transformer-based language models also gradually learn increasingly complicated functions for generating language.
2) *The illustration is clear, and the structure is well-organised*:
The illustration of the method and the experiments is clear, and the structure of the paper is well-organised. Therefore, the whole idea is delivered very well.
3) *The methodology is convincing*:
The study methodology, clones of datasets and factored attention, is convincing per se, and can quite well align with the hypothesis investigated in the paper. Though there is room for this work to be improved, it could be a pivot step towards understanding the learning dynamics of Transformer-based language models.
References
[1] Smith et al. (2013). Linguistic structure is an evolutionary trade-off between simplicity and expressivity
Weaknesses: 1) *Uncommonly used architecture*.
As a first step towards understanding the learning dynamics of Transformer-based language models, I understand the model choice of the authors. However, this also limits the contributions and impact of this work. From my perspective, "Transformers" in the title might better be replaced by "BERT-like Language Models", since the work investigates only a particular implementation of transformers, and only on language modelling. So "learning dynamics of transformers" might be far bigger than the research scope of this work.
Moreover, for language modelling, decoder-only Transformers are more widely applied these days. The $x^2$ activation function studied in this work is not common for language modelling either. These two factors further limit the contributions and impact of the work.
2) *Higher orders are not straightforward to sample.*
As mentioned by the authors in Section 5, the time complexity of the sampling algorithm for creating clones is not linear with respect to the number of samples. Thus, it would become increasingly harder to sample clones of higher orders. This is a major bottleneck to applying the work at larger scales.
3) *The empirical evidence is not strong enough.*
To me, only one model on one specific dataset is not very convincing to empirically verify the proposed simplicity bias hypothesis. It would be much more convincing if the authors can show empirical evidence on more diverse language tasks, e.g. machine translation, and etc.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1) What does the "last layer" in line 163 mean?
It's unclear to me how the "last layer" in line 163 is defined. I assume it means the (bottom) layer closest to the inputs, a.k.a. the second layer to be activated in the model. The authors need to clarify its meaning to avoid unnecessary confusion.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Please refer to my weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We want to thank you for your positive comments and your enthusiasm for our work. We really appreciated the
series of language evolution studies on simplicity biases that you suggested,
and we decided to include them in the revised version of our manuscript.
Below we reply to your questions and concerns point-by-point; we hope that these points alleviate your concerns regarding the architecture, the time complexity of sampling, and the need for additional empirical
evidence. If so, we would appreciate it if you could reconsider your rating; if
not, we're looking forward to discussing further during the discussion period.
- *Title*: We agree with your suggestion for the title, which would be more
specific.
- *Activation function:* In our framework, we only use the $x^2$ activation
function in the architectures that are subsequently used to generate the
clones of a real NLP dataset, in which the maximum degree of interaction among
tokens can be controlled by tuning the number of layers. We use these clones
to demonstrate the simplicity bias of a standard transformer encoder
(BERT-like) with *standard activation function* trained under MLM on a real
NLP task. Thus, the main result of our paper is obtained for a standard
architecture, trained with a standard task on a standard dataset.
- *Decoder-only transformers* are indeed popular language models. The procedure
we present in this manuscript, and in particular the analytic derivations, are
however designed for BERT-style transformers, and validated only on such
models. Extending it to causal models and other NLP tasks is for sure possible
and interesting, but non-trivial, and we plan to investigate this topic in our future research.
- *Time complexity of sampling:* The reviewer is right that the sampling of the
clones is not trivial; it is difficult to fully decorrelate the samples, and
the complexity of the method we use is not linear in the number of
samples. However, the modified transformer architectures that model the data
are computationally much more easy to handle than, say, estimating the
corresponding $n$-grams. The clone models require only a couple of layers of
factored self-attention, which is easier to compute than vanilla
self-attention layers, and they do not need fully-connected networks
in-between attention layers. This makes sampling of our clones, while
challenging, also way cheaper than sampling the distribution generated by an
ordinary transformer.
- *Evidence for simplicity bias:* We appreciate the need for additional experimental verification,
and have repeated our experiments on another data set, WikiText101. As we show
in our plot in the additional one-side PDF, we can see sequential learning of
increasingly higher-order interactions also on this data set. In our
manuscript, we further show sequential learning in a controlled setting with
synthetic data (cf. Section 2) and we see the same behaviour in our analytical
treatment of Sec. 3.1. We hope that this combination of experimental
verification on tinystories, WikiText101, synthetic data, combined with our
analytical results, is convincing evidence for our claims. | Summary: This paper demonstrates that masked language models like BERT approximate distributions with increasing complexity in terms of the number of interactions, as tracked over the course of training. They do this by approximating lower order interactions with provably limited models.
Strengths: This paper has one contribution that I think is very valuable: it introduces a new family of neural architectures that have a guarantee on what order of feature interaction they are capable of representing.
Furthermore, the paper is one of a few that use an approach of approximating a true distribution using an analytically characterized language; the most common way this is deployed is by training on ngram-generated text to guarantee that it is representable by only ngrams. This is a promising approach that sits at the midpoint between purely synthetic and purely natural data, and more work in machine learning should embrace it.
This paper provides another example of how models learn increasingly complex representations over the course of training. It is a good addition to the literature, although it is not the first such paper in language modeling or NLP.
Weaknesses: *Post rebuttals: The authors have expressed a willingness to rewrite the misleading framing and expand their literature review, so I have raised my score from 4 to 6.*
The paper should be updated to reflect related literature in language modeling or NLP. Here are a few other examples:
- Language models start out by behaving like ngram models in their grammatical capabilities and eventually begin to act more like humans https://aclanthology.org/2022.acl-long.568.pdf
- there are well documented dependencies including the famous checkmate in one emergence example from https://arxiv.org/pdf/2206.04615
- RoBERTa achieves high performance on most linguistic benchmarks early in pre-training, but more complex tasks require longer pre-training time. https://aclanthology.org/2021.findings-emnlp.71/
- LSTMs start by learning representations that correspond to simple groupings like part of speech and eventually learn more complex language modeling features https://aclanthology.org/N19-1329/
- Section 4 explores increasingly complex ngram learning in https://arxiv.org/pdf/2402.04362
in context learning approximates shorter ngrams before longer ngram distributions https://arxiv.org/abs/2402.11004
The core issue with this paper is that at no point does it describe a simplicity bias. While conceptual dependencies and progressively complex learned distributions are clearly related to simplicity bias, these things are not the same. A simplicity bias requires one to show that the model fails to learn something difficult because it has learned something easier, but arguably what is described here might actually be a sequence of strict dependencies between interaction levels. In order to demonstrate a simplicity bias, the authors would have to show that the model fails to learn higher order interactions because of the availability of lower order interactions, but the lower order interactions are available regardless of whether the higher order interactions are present, so there are no conditions to contrast in this way.
Some ways of demonstrating simplicity bias are:
- demonstrating that introducing a simple rule prevents the model from learning a more complex rule https://arxiv.org/abs/2011.09468 https://arxiv.org/abs/2006.07710
- demonstrating that suppressing a simple rule allows the model to get better at learning a more complex rule https://arxiv.org/abs/2309.07311 (specifically is focused on MLMs as well)
- Correlating the accessibility of a rule and the degree to which the model ultimately captures a different rule https://arxiv.org/abs/2006.12433
- analytic proofs of a preference towards simplicity https://arxiv.org/abs/1805.08522
It is not sufficient to simply demonstrate that the model happens to learn simple things and then more complex things later on.
I want to be clear that this paper still has value, as it demonstrates yet another way in which these models learn increasingly complex distributions, which is a different but related thread of the literature. However, the framing is misleading and I do not believe it is ready for publication without a substantial rewrite.
Technical Quality: 2
Clarity: 2
Questions for Authors: Can you give a simple explanation of what a k-body interaction would look like? Is it essentially stating something about ngram models, but not restricted to the most proximate tokens?
Can you guarantee that the different provably limited models actually differ by nothing but their limitations? What about random variation?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: nothing obvious
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your careful reading of the manuscript, your
detailed review, and the many references. Below we reply point-by-point; we hope that our replies alleviate your concerns. If so, we would appreciate it if you could revisit your rating; if not, we look forward to discussing further during the discussion period.
- *Related works* We thank the referee for the references. Indeed, the
connection with n-gram models is extremely important, since our clones
implement n-grams using a transformer architecture. We will explicitly mention
this in the revised version. On the other hand, we are aware that the NLP
community has made enormous steps forward in understanding learning dynamics,
with paradigms that go well beyond n-grams. In the revised version of the
manuscript we will explicitly mention ref
https://aclanthology.org/2022.acl-long.568.pdf as an example of advanced
quantitative analysis in this direction. We will also mention
https://aclanthology.org/N19-1329/ as a technique which allows probing the
learning dynamics in language. Finally, we discuss the relation of our work to
the very recent paper https://arxiv.org/pdf/2402.04362. They show that deep
transformers first learn the token unigram and then the bigram
distribution. However, estimating the higher-order interactions with the approach described in this work is prohibitively expensive on large corpora with many tokens due to the curse of
dimensionality. Moreover, we would like to highlight that none of these papers,
including arXiv:2402.04362, introduces an architecture which produces
distributions in which the simplicity bias can be _controlled_ by tuning
the interactions between tokens that are captured by the model, at an arbitrary order. This
analytically guaranteed control, as also recognized by the reviewer, is an
important and novel contribution of our work.
- *The nature of simplicity bias* Thank you for sharing these references on
various ways in which learning simple rules might interfere with learning more
complex rules. These papers demonstrate "simplicity biases" by performing
experiments on appropriately fine-tuned training tasks, where simple rules
based on spurious correlations hinder the network from learning the
generalisable input features. These behaviours are interesting and important, but have to be induced by an external intervention (a fine-tuned task). If trained on a standard task (say text completion), transformers seem instead to pick up the most relevant
underlying rules of language rather well, producing flawless chunks of
text. It is this process that we want to study through the lens of another
line of research. We hope that our paper convincingly shows that during a single training run, a model often
seems to be learning increasingly complex functions or distributions that
explain its training data. We discuss previous theoretical and experimental
work in the introduction, in particular in lines 18-26, many of which have
been published at NeurIPS/ICML/ICLR in the last couple of years (Refs
2,3,4,5,6,10,12,13). We will mention this separate line of work on simplicity
biases in the revised version of the manuscript.
Regarding your *questions*:
> Can you give a simple explanation of what a k-body interaction would look
> like? is it essentially stating something about ngram models, but not
> restricted to the most proximate tokens?
A $k$-body interaction is any interaction between $k$ tokens anywhere along the
sequence that cannot be written as a product of lower-order interactions. In
the energy-based models we use to construct the clones, each such interaction
corresponds to one term in the energy function involving $k$ tokens. Indeed, the tokens do not have to be proximate. N-gram
models instead simply define the number of previous tokens on which a given
token is conditionally dependent, i.e. how many conditions there are in $p(x_n |
x_{n-1}, x_{n-2}, …)$.
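To make the contrast concrete, the following is a small illustrative sketch (our own, not from the paper; all names and numbers are placeholders) of an n-gram conditional versus a $k$-body energy term:

```python
import numpy as np

# --- n-gram view: the next token depends only on the most recent tokens ---
# a bigram model stores p(x_n | x_{n-1}) as a row-stochastic matrix
V = 4                                  # vocabulary size
rng = np.random.default_rng(0)
bigram = rng.random((V, V))
bigram /= bigram.sum(axis=1, keepdims=True)

def ngram_logprob(seq, table):
    """Log-probability of a sequence under a bigram model (adjacent tokens only)."""
    return sum(np.log(table[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# --- k-body view: one energy term couples k tokens ANYWHERE in the sequence ---
# here a 3-body term couples positions 0, 2 and 5, regardless of distance
def three_body_energy(seq, J):
    """A single 3-body interaction term J[x_0, x_2, x_5];
    by definition it cannot be written as a product of pair terms."""
    return J[seq[0], seq[2], seq[5]]

J = rng.random((V, V, V))
seq = [1, 3, 0, 2, 2, 1]
print(ngram_logprob(seq, bigram))      # depends only on adjacent pairs
print(three_body_energy(seq, J))       # couples the distant positions 0, 2, 5
```

The point of the sketch is the locality difference: the bigram term only ever reads adjacent pairs, while the interaction term's participating positions are arbitrary.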
> Can you guarantee that the different provably limited models actually differ
> by nothing but their limitations? What about random variation?
The architectures of the "limited models" that we use to obtain the clones
differ only by their "limitations", namely by the degree of the interactions between
tokens that they cover. After training, the values of the parameters of these
models could have some random variations in their values due to different
initial conditions; however, this will not change the degree of the interactions
between tokens represented by these models.
---
Rebuttal Comment 1.1:
Title: Response
Comment: > The nature of simplicity bias
I do not feel that this response addresses my criticism, which is that none of the results justify framing the paper as studying simplicity bias. Although training dynamics of increasing complexity are clearly related to simplicity bias, they are not evidence of simplicity bias and therefore the paper should not be framed around describing them as such. This criticism is fairly mild, but I consider it to be crucial: I believe that the title as written is making a claim that is unrelated to the results and not supported by evidence in the paper. I don't think that this paper should be published with a misleading title or framing, although the results themselves are interesting. This issue is easily fixed; I would easily raise my score for a version of the paper that had a different title and discussed simplicity bias as related work rather than as a description of the results here.
> The architecture of the "limited models" that we use to obtain the clones only differ by their "limitations", namely by the degree of the interactions between tokens that they cover. After training, the values of the parameters of these models could have some random variations in their values due to different initial conditions; however, this will not change the degree of the interactions between tokens represented by these models.
Is it empirically true that models trained under different initial conditions with the same set of limitations exhibit all of the same behavior that you measure?
---
Reply to Comment 1.1.1:
Title: Reply
Comment: Thank you for engaging with our rebuttal so quickly, we appreciate it.
> Re: framing
We now realise that the term simplicity bias can be misleading; since we strive for maximum clarity of our paper, we are considering changing the framing of the paper by changing its title to “A bias towards low-order interactions in the early learning dynamics of BERT-like language models”, and generally changing “simplicity bias” to the more descriptive “bias towards low-order interactions” throughout the text. Would that address your criticism?
> Is it empirically true that models trained under different initial conditions with the same set of limitations exhibit all of the same behavior that you measure?
Yes; in the course of our work, we found that if we generate clones from networks with the same interaction-limited architecture trained from different initial conditions, the performance of a given transformer tested on the different clones will be the same to within the variation in performance of the same transformer trained from different initial conditions. | Summary: The paper investigates the simplicity bias in BERT-style Transformers trained with MLM. The study reveals that these models initially learn simpler interactions between tokens and gradually learn higher-order interactions. This finding aligns with simplicity biases observed in many neural network architectures. The authors develop a method to create dataset "clones" that capture token interactions up to specified orders -- which they called many-body interactions. This allows detailed analysis of the learning dynamics in Transformers.
Strengths: **Novel Insight**. The paper provides novel insights into the learning dynamics of Transformers, highlighting a simplicity bias that was previously unexplored in the context of self-supervised learning with MLM. At the same time, most previous simplicity bias papers focus more on the bias at the **beginning** or **end** of training, while this paper provides a novel insight by studying the full training dynamics and the simplicity-bias explanation that arises over the course of training.
**Many-body interactions**. The introduction of a procedure to generate dataset clones that capture interactions up to specified orders is a significant methodological contribution, facilitating deeper analysis of model learning behaviors. The proposed method enables a deeper understanding of token interactions and is itself a substantial contribution.
Weaknesses: **Computational overhead**. The method to create dataset clones and analyze the learning dynamics is **computationally intensive**. This could limit its applicability to larger datasets or more complex models. I would like to hear the authors' proposals on how to deal with this issue.
**Generality** While the study focuses on BERT-style Transformers and the TinyStories dataset, the generalizability of the findings to other Transformer architectures or more diverse NLP tasks remains to be demonstrated. One thing to note is that Transformers are also applied to vision tasks, speech tasks, etc. I would like to hear how this technique could generalize to studying Transformers on non-language tasks and whether the conclusion would be similar. Similarly, how this would extend to decoder-only models such as GPT remains unknown.
Technical Quality: 3
Clarity: 3
Questions for Authors: It's a very clearly written and presented paper, I don't have further questions to ask.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your detailed questions and your feedback, which led us to conduct additional experiments. Below we reply point-by-point; we hope that our replies alleviate your concerns. If so, we would appreciate it if you could revisit your rating; if not, we look forward to discussing further during the discussion period.
- *Computational cost:* applying our method to train a many-body
approximation of a data set, or "clone", is _less_ expensive than training a
standard BERT-like model, since the architecture is much simplified. Likewise,
the sampling procedure that we use from Goyal et al. [ICLR '22,
arXiv:2106.02736] can sample from standard BERT models. Hence we are confident
that the method can be applied to any data set to which BERT has been applied.
- *Generality:* The procedure, as presented in the manuscript, and in particular
the analytic derivations, are designed for BERT-style transformers, and
validated only on such models. Extending it to causal models and other NLP
tasks is certainly possible and interesting, but non-trivial, as the probability structure induced by causal training is analytically (and empirically) different. We plan to investigate these
topics in our future research. To further support the empirical
evidence of sequential learning in BERT-like architectures trained with MLM on
NLP datasets, we replicated the analysis on the WikiText-2 dataset, observing
a similar hierarchy in the learning process, and with a vanilla transformer on the synthetic data set we created (see plots in the attached one-page PDF). In both cases, we see clear sequential learning by the transformers. | Summary: This paper shows that transformers sequentially learn high-order interactions between input tokens during the training process, which echoes the simplicity bias prevalent in neural networks. Specifically, the paper proposes a method to extract certain orders of interactions from the original training set, and construct a new dataset that contains purely low-order interactions. Then, the new dataset is used to evaluate how well a transformer learns certain orders of interactions.
Strengths: The paper attempts to validate the simplicity bias on neural networks trained on masked language modelling, which has not been extensively studied.
Weaknesses: 1. The main claim of the paper “sequential learning increasingly higher-order interactions by BERT” is not well supported by the results in Figure 4. The term “sequential learning” implies that (i) at the early stage of training, the model first learns low-order interactions *without learning high-order interactions*, and (ii) at the later stage of training, the model learns high-order interactions. However, although Figure 4 can support point (ii), it cannot support point (i). The reason is as follows: at epoch 3, the testing loss on 7-body clones is already significantly lower than the testing loss on 3-body clones, which means that the model already learns part of the 7-order interactions at the early training stage. This further indicates that the model learns 3-order, 5-order, and 7-order interactions at the same time in the early stage of training.
2. Figure 4 is only tested on one instance, thus lacking statistical significance. I encourage the authors to show the test loss on more individual instances as well as the average test loss across all instances to consolidate this conclusion.
3. The paper is hard to follow due to inconsistent and undefined notations.
(1) I suggest adding a preliminary section with a formal definition of what “factored attention” means. There should be at least an equation showing how factored attention operates given a sequence of input tokens, what parameters it contains, etc.
(2) In Line 93, $\mu$ and $\alpha$ are not defined. I guess $\mu$ means the $\mu$-th sample in the training set, but I’m not sure what $\alpha$ refers to.
(3) According to Line 84 “each token $s_i$ is an element taking values in a discrete vocabulary $s_i\in \\{1,...,|\mathbb{V}|\\}$”, the equation $s_{i \alpha}^\mu=1$ in Line 93 means that we are always measuring the probability of generating the first token in the vocabulary. This seems unreasonable.
(4) In Equation (2), $A$ and $V$ are not defined.
(5) The meaning of the subscript $\mu$ in Equation (3) seems to conflict with the meaning of the superscript $\mu$ in Line 93.
(6) In Line 134, $p$ is not defined. Moreover, it is not clear whether the superscript $p$ is an exponent, or simply an index.
4. Figure 1 is misleading. The input sentence is shown as a folded tape, which highly resembles the structure of an amino acid chain. However, this paper has no connection with amino acids, nor does it discuss the spatial position of each token. Thus, I think this design is quite redundant. Why not simply write the input sentence on a horizontal line?
Technical Quality: 2
Clarity: 2
Questions for Authors: Figure 2(c) shows the mean squared displacement (MSD) of the weight for a 3-layer transformer with *factored attention* during training. It shows that the weight of the last layer first begins to change, then the second layer, and at last the first layer. I wonder if this phenomenon can also be observed on a transformer with *standard attention*.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations are properly stated in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for their detailed questions and their feedback. Below we
reply point-by-point; we hope that our replies alleviate your concerns. If so, we would appreciate it if you could revisit your rating; if not, we look forward to discussing further during the discussion period.
> Support for our main claim in Figures 1 and 4:
We're glad the reviewer agrees that our results show that transformers learn
higher-order interactions between tokens in the later stages of training, point
(ii). However, we think our results support also point (i) both in experiments
on real data and in the controlled setting with synthetic data. The left panel of
Figure 1 shows that the test loss curves for the different clones overlap during
the initial stage of training. Specifically, even after the initial plateau,
between 5000 and 6000 training steps, the loss curves on the various clones
continue to overlap despite the quick improvement in test error. The crossover
between the two-body and the higher-order interactions is at step 15000, and
only at 22000 the models start differing significantly. These results show that,
at the early stage of training, the model first learns low-order interactions
_without learning high-order interactions_. Furthermore, point (i) is clearly
supported by the experiments on the synthetic dataset (Section 2) and from the
analytical computation (Section 3.1).
> I encourage the authors to show the test loss on more individual
> instances as well as the average test loss across all instances to consolidate
> this conclusion.
We have performed our experiments using various random seeds, as specified in
the figure captions, and found the results to be consistent across different
seeds. We have updated the plots by adding the corresponding error bars; we show
an example in the rebuttal figure 1, where we redrew Fig 1 with error bars. We
will update all figures in this way in the revised version.
> Clarifying the nature of “factored attention”
We define factored attention, where the attention matrix depends only on the
position of the tokens, by writing the corresponding probability for the masked
token $p_{\mathrm{mlm}}$ in Eq (2). To clarify this we will first define only
the factored self-attention layer, and then the equation for
$p_{\mathrm{mlm}}$. The trainable parameters in a factored self-attention layer
are the input-independent attention weights $A$ (an $L\times L$ matrix, with $L$
the length of the input sequence) and the value matrix $V$.
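As an illustrative sketch of this definition (our own minimal reading with placeholder shapes, not the paper's code), a factored self-attention layer could look like:

```python
import numpy as np

L, vocab, d = 6, 10, 8                 # sequence length, vocab size, embed dim
rng = np.random.default_rng(0)

# trainable parameters of a factored self-attention layer:
A = rng.standard_normal((L, L))        # input-INDEPENDENT attention weights (L x L)
V = rng.standard_normal((vocab, d))    # value matrix (here doubling as embedding)

def factored_attention(tokens):
    """out[i] = sum_j softmax(A)[i, j] * V[tokens[j]] — the attention weights
    depend only on the positions (i, j), never on the token content."""
    weights = np.exp(A) / np.exp(A).sum(axis=1, keepdims=True)  # row-wise softmax
    values = V[tokens]                 # (L, d) value-projected tokens
    return weights @ values            # (L, d)

out = factored_attention(rng.integers(0, vocab, size=L))
print(out.shape)                       # (6, 8)
```

The contrast with standard attention is that here the softmax argument contains no query-key product of the inputs, so the learned attention pattern is a pure function of position.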
> (2) In Line 93, $\mu$ and $\alpha$ are not defined. I guess $\mu$ means the
> $\mu$-th sample in the training set, but I’m not sure what $\alpha$ refers to.
You are right: $\mu$ is an index that runs over training examples; $\alpha$
indexes positions along the sequence. We will clarify this in the revised
manuscript.
> (3) According to Line 84 “each token is an element taking values in a discrete
> vocabulary ”, the equation in Line 93 means that we are always measuring the
> probability of generating the first token in the vocabulary. This seems
> unreasonable.
We wrote the equation in line 93 for the first token to keep the equation
readable; this equation (and any other) naturally extends to predicting any
other token.
> (4) In Equation (2), $A$ and $V$ are not defined.
Thank you for pointing this out -- $A$ and $V$ are the attention and the value
matrix, respectively. We will add a comment.
> (5) The meaning of the subscript $\mu$ in Equation (3) seems to conflict with the
> meaning of the superscript in Line 93.
In both equations, $\mu$ is simply a "dummy" index that is summed over.
> (6) In Line 134, $p$ is not defined. Moreover, it is not clear whether the
> superscript is an exponent, or simply an index.
In this equation, $p$ is also a dummy index. We now see that this might clash
with the notation for probabilities, and we will use a different dummy index
for this sum.
> Regarding the design of Figure 1
We have chosen the folded tape shape because it allows highlighting groups of words which are far apart but (qualitatively) form a group of interactors. Importantly, the factored attention architecture learns exactly the strength of the many-body interactions along the sequence. This is the message that we would like to convey with the figure. Over the past few days we attempted to follow the reviewer’s suggestion and write the sentence on a horizontal line, but we did not manage to find a graphically satisfactory alternative.
> Repeating the experiments on synthetic data with a
> transformer with standard attention.
This is a great question, and inspired us to perform an additional analysis. We
trained a standard three-layer transformer encoder on clones of a synthetic
data set characterized by interactions involving up to four bodies. Being a
standard transformer, this architecture also had several layers of Layer
Normalization, which mitigate the occurrence of plateaus even on synthetic
datasets. However, the plot for this standard transformer in Fig
1 of the rebuttal figures clearly shows sequential learning also in this
case.
---
Rebuttal Comment 1.1:
Comment: Thank you for your point-by-point response. I appreciate the new experiment on the transformer with standard attention. Regarding the notations, I encourage the authors to clarify all the mentioned issues in the future manuscript, especially replacing the dummy index $\mu$ in Eq. (3) and $p$ in Line 134 with other appropriate symbols.
I have further questions regarding Fig. 4 and Fig. 1. The authors respond to my concern about the insufficient empirical support of Fig.4 for the main claim by referring to Fig.1. However, I find that the results in Fig.1 and Fig.4 seem to conflict with each other. In Fig.1, the test losses on 3-body, 5-body, and 7-body clones are almost the same in the early stage of training (before Step 5000), while in Fig.4, the test losses on 3-body, 5-body, and 7-body clones significantly differ in the early stage of training (epoch 3). Could you provide a further explanation?
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with our rebuttal, we appreciate your effort.
> Regarding the notation
Thanks again for spotting these typos; we have already changed the indices in those sums for clarity.
> Further questions regarding Fig. 4 and Fig. 1
Thank you for following up on this.
- We drew Fig 4 using the same data as Fig 1. In this experiment, 1 epoch = 3000 steps of SGD. Fig 1 shows that the test losses of the transformer on the different clones are different after 9k steps (=3 epochs). Hence the blue dots in Fig 4, which represent the performance of the transformer on the different clones (from left to right) after three epochs, have different values.
- Sequential learning in sense (ii) of your definition (learning higher-order interactions at a later stage of training) is borne out by Fig 4 since the performance on the three-body clone does not improve at later epochs (left-most column), while the performance on the five-body clones continues to improve until ~epoch 10, and improves on the seven-body clone for even longer.
- Sequential learning in sense (i) of your definition (learning low-order interactions without learning high-order interactions) can be seen in Fig 1 during the first two epochs, where the loss curves on the various clones continue to overlap despite the quick improvement in test error.
Does this answer your questions? We will add epochs to the $x$-axis of Fig 1 for increased readability, and explicitly state the number of steps per epoch for the experiments. Please let us know if there are other changes that could help the reader, or if you have any other questions. | Rebuttal 1:
Rebuttal: We thank the reviewers for taking the time to review our manuscript. In our paper, we show that BERT-style transformers trained using masked language modeling learn increasingly complex interactions among input tokens. To conduct this analysis, we develop a procedure to generate clones of a given natural language data set, which rigorously capture the interactions between tokens up to a specified order.
Prompted by the thoughtful questions in the replies, we conducted a number of additional experiments to solidify our points:
- On the left of Fig 1 of the attached PDF, we show sequential learning in the same setting as in Figure 1 of the main submission, but on a different data set: WikiText.
- On the right of the same figure, we show sequential learning of a vanilla transformer with standard attention trained on our synthetic data set.
- We have also added an additional figure showing the statistical uncertainty in the test loss trajectories for Figure 1.
We hope that these results alleviate your concerns. If not, we look forward to discussing further during the discussion period.
Pdf: /pdf/aee74e704d3a0518a88ef68783f395d83ba8dea6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Continuous Heatmap Regression for Pose Estimation via Implicit Neural Representation | Accept (poster) | Summary: This paper introduces a new heatmap representation for 2D human pose estimation. Prior approaches use a quantized representation of heatmaps, where a confidence score is assigned to each pixel. In addition to being dependent on the image resolution, this representation does not match the usual ground truth coordinates, which are continuous when working in image crops.
This paper proposes an implicit neural representation to address this limitation of prior works. Specifically, given 2D coordinates (which do not necessarily correspond to the center of a pixel), the introduced neural network outputs a confidence score for each keypoint. At inference, the coordinates of each keypoint are recovered using the maximum confidence value. The main contributions of this paper are the following: (1) The authors introduce NerPE, a novel implicit representation of heatmaps for continuous human pose estimation; (2) A progressive coordinate decoding method is introduced to recover 2D coordinates from continuous heatmaps.
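A minimal sketch of this idea (our own illustrative reading: a fixed Gaussian stands in for the learned implicit function, and an ad-hoc coarse-to-fine grid search stands in for the paper's progressive decoding):

```python
import numpy as np

# stand-in for the learned implicit function: a confidence value at ANY
# continuous (x, y), here a Gaussian centred on a sub-pixel keypoint
center = np.array([17.34, 9.81])
def confidence(xy, sigma=2.0):
    return np.exp(-np.sum((xy - center) ** 2) / (2 * sigma ** 2))

# coarse decoding: query the function on an integer grid, take the argmax cell
coarse = np.array([[confidence(np.array([x, y])) for x in range(32)]
                   for y in range(16)])
y0, x0 = np.unravel_index(coarse.argmax(), coarse.shape)

# refine at 10x finer resolution in a one-pixel neighbourhood of (x0, y0)
fine_pts = [(x0 - 1 + i / 10, y0 - 1 + j / 10)
            for i in range(21) for j in range(21)]
best = max(fine_pts, key=lambda p: confidence(np.array(p)))
print(best)   # close to (17.34, 9.81), i.e. below one-pixel quantization error
```

The key property the sketch illustrates is that, because the representation is continuous, the query resolution at decoding time is a free choice rather than a property baked into a fixed-size heatmap tensor.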
Strengths: - The approach is novel; to my knowledge, no prior work has used implicit representations for heatmaps.
- The overall writing is good, and the paper reads nicely. The weakness of prior works addressed in this paper is clearly identified, and the proposed approach is well-justified.
- The proposed approach yields consistent improvement over the SOTA methods, and the ablation study gives a good idea of the impact of choice designs.
Weaknesses: - The Related Work section lacks discussions and comparisons with other approaches attempting to address the quantization error in heatmaps, such as [42,a,b,c,d,e,f,g]. The authors say that "As far as we know, existing heatmap-based methods all belong to discrete heatmap regression" (L104), but for instance, [b] predicts a float location using an offset within the original heatmap. Similar approaches using PixelShuffle operation were proposed [f]. In general, the introduction presents the problem of heatmap quantization as if prior works ignored it, but plenty of works attempted to address this problem.
- I am unsure if having a continuous representation of heatmaps is useful. The ground truth is given in pixel coordinates on the full image. I agree that increasing the heatmap resolution until we reach the original image resolution is helpful, but is it useful to have a higher resolution than the original image? In the end, the ground truth is given in full pixel coordinates.
- Some information important for reproducibility is missing. For instance, what are the training and validation datasets and the number of epochs? Even if the authors say they used "standard settings" I believe this is not enough to reproduce the results.
- No information is provided regarding the material used for training and testing or the running times. This is particularly concerning as the method has a high computational cost. The authors propose an alternative decoding strategy that seems much less expensive, but comparing it with other methods and heatmap representations would be necessary.
[a] Bulat, A., Sanchez, E., & Tzimiropoulos, G. (2021). Subpixel heatmap regression for facial landmark localization. BMVC
[b] Lan, X., Hu, Q., & Cheng, J. (2021). Revisiting quantization error in face alignment. ICCV
[c] Yu, B., & Tao, D. (2021). Heatmap regression via randomized rounding. PAMI
[d] Papandreou, G., Zhu, T., Kanazawa, N., Toshev, A., Tompson, J., Bregler, C., & Murphy, K. (2017). Towards accurate multi-person pose estimation in the wild. CVPR
[e] Luvizon, D. C., Picard, D., & Tabia, H. (2018). 2d/3d pose estimation and action recognition using multitask deep learning. CVPR
[f] Wang, H., Liu, J., Tang, J., & Wu, G. (2023). Lightweight Super-Resolution Head for Human Pose Estimation. MM
[g] Li, J., Bian, S., Zeng, A., Wang, C., Pang, B., Liu, W., & Lu, C. (2021). Human pose regression with residual log-likelihood estimation. ICCV
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) "The meaning of the confidence scores output by current heatmap-based models has not been theoretically proven." (L43). What does this really mean? Is the NerPE representation "theoretically proven"?
(2) In Equation (5), to my understanding, $f_W(i,j)$ and $f_b(i,j)$ are floats, and $z^*$ would be of dimension $C$? But isn't $H_{z^*}$ supposed to be a float since it is a confidence value?
(3) "We are surprised to find that the conclusion drawn is similar to the local implicit image function in [8] except for the additional normalization step." (L179). Why is that surprising? Isn't the process a bit similar?
(4) Could we consider that the proposed method consists of doing sub-heatmaps until we reach a desired degree of precision?
(5) The model is trained to output Gaussian or Laplacian heatmaps. However, at inference, there is no guarantee that the confidence distribution follows the same distribution. I think that this is a good point, as this brings flexibility. However, couldn't that raise issues for finding the argmax? For instance, the model could potentially "hesitate" between 2 different places when placing the center of the heatmap (the predicted distribution would look like a mixture of Gaussians, for instance). Did you observe such cases in practice, and could other strategies be explored instead of choosing the argmax?
Side remark: Typo L15 "desgin"
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations of this work are briefly discussed in Section 5:
- The limited experiments on network architecture and index embedding are not a problem, as this work aimed to focus on the heatmap representation.
- The discussion on the necessity of predicting discrete heatmaps for computing an argmax is interesting, but it would have been more valuable if the authors had provided potential improvements and strategies on that point.
The potential negative societal impacts are not discussed but are rather limited. The authors could consider mentioning the potential misuse of pose estimation for intrusive surveillance or military applications. The environmental cost could also be discussed, especially since this method is more computationally expensive than prior works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review. Below we address all the concerns.
***W1: The Related Work section lacks discussions and comparisons with other approaches attempting to address the quantization error ...***
The quantization error caused by heatmap discretization is a well-known problem. What we want to express in the Introduction is that existing works are committed to overcoming quantization error (the symptom), while we aim to eliminate heatmap discretization itself (the cause). Although some methods achieve sub-pixel positioning accuracy with the help of techniques such as offset prediction, their performance still depends on the quality of heatmap regression. We do not deny the achievements of current works on the quantization error problem, but rather provide a new perspective on solving it, inspired by implicit neural representations. We will revise the Introduction to avoid this misunderstanding and distinguish our method from the works mentioned by the reviewer in Related Work.
***W2: I am unsure if having a continuous representation of heatmaps is useful. The ground truth is given in pixel ...***
In the top-down paradigm, the image patches containing the person instances in the original image are scaled and rotated before being fed into the pose estimation network. As a result, the original pixel-level annotation becomes the floating-point ground truth after affine transformation. Therefore, in coordinate decoding, even if the heatmap resolution is equal to the input resolution, the coordinates calculated by argmax are still lossy.
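A toy numeric example (all numbers below are made up for illustration, not from the paper) of why the ground truth becomes floating-point after the crop's affine transform, and why an integer-grid argmax therefore stays lossy:

```python
import numpy as np

# Hypothetical pixel annotation and crop affine (scale + translation).
joint_px = np.array([143.0, 87.0])            # original integer pixel annotation
A = np.array([[0.75, 0.0, -60.0],             # illustrative 2x3 affine matrix
              [0.0, 0.75, -30.0]])

gt = A @ np.append(joint_px, 1.0)             # floating-point ground truth: [47.25, 35.25]
quantized = np.round(gt)                      # best an integer-grid argmax can return
error = np.abs(gt - quantized)                # residual quantization error, here 0.25 px
```

Even if the heatmap resolution equals the input resolution, the decoded coordinate can only land on grid points, so the 0.25-pixel residual above cannot be recovered by argmax alone.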
***W3: Some information important for reproducibility is missing. For instance, what are the training and validation datasets ...***
COCO, MPII, and CrowdPose are standard human pose estimation datasets with fixed settings in terms of train/val division, data augmentation, etc. The random rotation and scaling factors are [[-45°, 45°], [0.65, 1.35]] on COCO, [[-30°, 30°], [0.75, 1.25]] on MPII, and [[-45°, 45°], [0.65, 1.35]] on CrowdPose. "Standard settings" means that the training process of NerPE is consistent with the corresponding baseline, where the learning rate is initialized to 0.001 and decayed twice by a factor of 0.1. The learning schedules with SimpleBaseline, HRNet, and TokenPose as the backbone are the [90th, 120th, 140th], [170th, 200th, 210th], and [200th, 260th, 300th] epochs, respectively.
***W4: No information is provided regarding the material used for training and testing or the running times ...***
We provide a comprehensive analysis of the efficiency and performance in **Tab. A3 of the attached PDF**, where we supplement GFLOPs and inference time to better compare NerPE with existing methods. As shown in Tab. A3, our progressive coordinate decoding effectively mitigates the main shortcoming of implicit neural representations, namely the heavy computation incurred when outputting high-resolution signals.
***Q1: "The meaning of the confidence scores output by current heatmap-based models has not been theoretically proven." ...***
Existing methods regress heatmaps in the form of 2D pixel arrays. To generate discrete ground truth heatmaps, many influential works (such as HRNet, ViTPose) place a fixed Gaussian kernel on the grid points to which body joints belong. Therefore, the ground truth heatmaps have a series of fixed discrete values while the heatmaps predicted by the neural network have continuous values, which are not equivalent. In contrast, our generated ground truth heatmaps are continuous, so there is no such problem.
***Q2: In Equation (5), to my understanding,*** $f_{W}(i,j)$ ***and*** $f_{b}(i,j)$ ***are floats, and*** $z^{*}$ ***would be of dimension*** $C$ ***? But ...***
In Eq. 5, $f_{W}(i,j)$ and $f_{b}(i,j)$ are rewritten from $W_{i,j}$ and $b_{i,j}$, where the mapping $f$ represents the selection from the weight libraries $W$ and $b$ according to an index, i.e., $f:(i,j)\mapsto W_{i,j},b_{i,j}$. The local feature vector $z^* \in \mathbb{R}^{C \times 1 \times 1}$ is an element of the image features $Z \in \mathbb{R}^{C \times H_{Z} \times W_{Z}}$, and $H_{z^*}$ denotes the cell in the heatmaps $H$ corresponding to $z^*$. So, $H_{z^*}$ is a float tensor rather than a single float.
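To illustrate our reading of this indexing, here is a minimal sketch. The linear read-out form ($W_{i,j} z^* + b_{i,j}$) is our own assumption for the sketch and is not claimed to match Eq. 5 exactly; all sizes are toy values:

```python
import numpy as np

Hc, Wc, C = 4, 4, 16                       # toy cell grid and channel count
rng = np.random.default_rng(0)
W_lib = rng.standard_normal((Hc, Wc, C))   # weight library W, indexed by (i, j)
b_lib = rng.standard_normal((Hc, Wc))      # bias library b, indexed by (i, j)

def f_W(i, j):
    # f : (i, j) -> W_{i,j}, selection from the weight library
    return W_lib[i, j]

def f_b(i, j):
    # f : (i, j) -> b_{i,j}, selection from the bias library
    return b_lib[i, j]

z_star = rng.standard_normal(C)            # local feature vector z*

# Assumed linear read-out per sub-position (our illustration only):
H_z = np.array([[f_W(i, j) @ z_star + f_b(i, j)
                 for j in range(Wc)] for i in range(Hc)])
# H_{z*} comes out as an Hc x Wc tensor of floats, not a single float.
```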
***Q3: "We are surprised to find that the conclusion ..." (L179). Why is that surprising? Isn't the process a bit similar?***
In [Ref1], the local version of implicit neural representation is proposed based on intuition; at least, no rigorous derivation is provided in that paper. In our work, we arrive at a similar conclusion by deriving it from sub-pixel convolution, which is exciting because it offers a principled explanation for the local implicit neural representation.
***Q4: Could we consider that the proposed method consists of doing sub-heatmaps until we reach a desired degree of precision ?***
Yes. Our method has two ways to reach the desired degree of precision. The standard NerPE directly outputs complete predicted heatmaps at the resolution corresponding to the required precision. The variant NerPE-p achieves equivalent resolution through coordinate decoding to iteratively refine the local parts of heatmaps, as the reviewer said.
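To make the coarse-to-fine idea concrete, here is a minimal sketch in which a fixed Gaussian stands in for the trained INR that maps a continuous query to a confidence score; the grid size, number of rounds, and peak location below are illustrative choices, not the paper's settings:

```python
import numpy as np

CENTER = np.array([0.537, 0.291])  # hypothetical sub-pixel peak, normalized coords

def heatmap(x, y):
    """Stand-in for the INR MLP: confidence at a continuous query point."""
    return np.exp(-((x - CENTER[0]) ** 2 + (y - CENTER[1]) ** 2) / (2 * 0.05 ** 2))

def progressive_argmax(n=8, rounds=4):
    """Query an n x n grid inside the current region, keep the winning cell,
    then recurse into that cell at finer resolution."""
    lo, size = np.array([0.0, 0.0]), 1.0
    for _ in range(rounds):
        xs = lo[0] + (np.arange(n) + 0.5) / n * size   # sub-cell centers (x)
        ys = lo[1] + (np.arange(n) + 0.5) / n * size   # sub-cell centers (y)
        scores = heatmap(xs[:, None], ys[None, :])
        i, j = np.unravel_index(np.argmax(scores), scores.shape)
        size /= n                                      # zoom into the winning cell
        lo = np.array([xs[i] - size / 2, ys[j] - size / 2])
    return lo + size / 2                               # center of the final cell

pred = progressive_argmax()
```

After 4 rounds of 8×8 queries, the equivalent resolution is 8⁴ = 4096 per axis with only 4 × 64 queries instead of 4096², which is the cost saving the NerPE-p variant exploits.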
***Q5: The model is trained to output Gaussian or Laplacian heatmaps. However, at inference, there is no guarantee that ...***
As the reviewer said, in existing heatmap regression research, the distribution of the predicted heatmaps is generally not guaranteed to conform to a Gaussian or Laplace distribution. The reason is that the loss function averages the per-pixel differences over the heatmap, so our method is not immune even though our heatmap representation is continuous. To address this problem, adding predictions of the mean and variance to assist the argmax may be a simple and effective solution.
[Ref1] Learning continuous image representation with local implicit image function. CVPR, 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot to the authors for this rebuttal, which addressed all my questions and concerns. I still think that this paper should be accepted.
I think it is important to add the information from Table A3 in the final version; I expected the running times to be larger given the inference process.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the positive feedback. We are glad that the reviewer thinks our work is worthy of acceptance.
> I think it is important to add the information from Table A3 in the final version; I expected the running times to be larger given the inference process.
We will report the analysis of efficiency in our paper. There are two reasons why the inference time is faster than expected: (1) In both NerPE and NerPE-p, the bakcbone network is executed only once in the forward pass. (2) The INR-related part is implemented by a simple MLP, and queries for multiple positions are performed in parallel on the GPU. Therefore, the inference time of our method is close to that of our baseline. | Summary: This paper proposes a new approach to predict continuous heatmap for keypoint localization. The proposed method adopts a MLP that receives position coordinates and corresponding feature and output the confidence score of this position. Then this method can query candidate points and select the point with maximum score as final results. Experiments on several keypoint localization benchmarks demonstrate the effectiveness of the proposed method.
Strengths: 1. The idea of introducing implicit neural representation to perform keypoint localization is new and can address the quantization error of existing heatmap-based methods.
2. This paper is well-written and easy to understand.
Weaknesses: My major concern is about the decoding process of the proposed method. It seems that this method has to evaluate a lot of points to select the best result, which introduces much inference computational cost. The introduced progressive coordinate decoding can reduce the computational cost but unavoidably increases the decoding time due to the multi-round decoding process. This paper should give a detailed efficiency comparison with existing methods such as heatmap-based ones (SimpleBaseline, HRNet, SimCC, ViTPose) and regression-based ones (RLE), including GFLOPs, inference time (ms), and GPU memory consumption under the same setting (all of the above methods release their code, so it is possible to conduct such comparisons). These comparisons are necessary because otherwise the advantages of the proposed method are not obvious.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Weakness
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper discusses its limitation in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful review. Below we address all the concerns.
***Response to weaknesses:***
***Efficiency:*** We compare the efficiency and performance of NerPE and existing methods in **Tab. A3 of the attached PDF**. Although implicit neural representations allow a continuous representation, they have the disadvantage of heavy computation when generating high-resolution signals, just as in their applications in other computer vision fields. In Tab. A3, NerPE-p significantly reduces the GFLOPs to an acceptable level (from 73.6 to 9.5) via our progressive coordinate decoding, which is designed around the characteristics of human pose estimation.
***Performance:*** NerPE has a positive impact on the performance of the baseline and can be generalized to most heatmap-based methods with minor structural changes. We need to be clear that the quantization error problem worsens as the input resolution decreases. The introduction of implicit neural representations in NerPE significantly improves the performance of the baseline on low-resolution images (e.g., 64×64), as shown in Tabs. 1 & 4. In Tab. 3, the same conclusion is also illustrated by the large lead of NerPE in PCKh\@0.1. | Summary: The authors tackle 2D human pose estimation through a continuous heatmap representation. Specifically, instead of representing the heatmap as a grid of values, they use an implicit neural representation, where the coarse feature vector along with queried coordinates are fed to an MLP, which predicts the heatmap value for that requested position. This can in theory provide infinite resolution for the heatmap, allowing more accurate localization of its peak, compared to the usual discretized heatmaps that can lead to quantization errors due to the low resolution. The model is trained with randomized query points with Gaussian target. For test time, the authors propose a progressive refinement scheme where the highest-value area in the heatmap is successively queried at finer levels.
Experiments are performed on COCO, MPII and CrowdPose with improvements over baselines, especially for evaluations with stricter thresholds.
Strengths: This is a creative and well-motivated approach. Whereas prior works often used heuristics such as moving the output point towards the second-highest heatmap pixel by a factor of 0.25 etc., this work is a more principled solution to producing arbitrary-resolution heatmaps.
The experiments are extensive, with three different backbones and three different datasets. The most convincing of these is the MPII experiment with the strict threshold, showing a clear improvement over comparable baselines. This is exactly the setting where one would expect to see the improvement. The other settings and dataset also see some quantitative improvement, which substantiates the claims of the paper.
There is little overhead in terms of parameter count and computational cost, and the technique can be applied to a wide range of approaches giving the potential for wide impact.
The writing has high quality and the structure is clear.
Weaknesses: Some important alternative techniques from the literature are not discussed, and this impacts the conclusions.
First, soft-argmax (integral regression) [1] was introduced specifically to enable continuous output that is not limited by the coarse downsampled grid. This would be an important baseline decoding method to use here. (There have been several extensions, including [2])
Second, offset regression such as the short-range offsets learned in PersonLab [3] would be another simple formulation allowing continuous output not limited by the grid.
The statement around L305 is not correct, the radius of the Gaussian does not have to be an integer for the gridded heatmap. There is no connection between these two, it is neither easier nor more appropriate to use integer-valued standard deviations for a Gaussian on a grid (alternatively, clarification is needed).
[1] Sun et al., Integral Human Pose Regression, ECCV 2018
[2] Gu et al., Bias-Compensated Integral Regression for Human Pose Estimation, PAMI 2023
[3] Papandreou et al. PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model, ECCV 2018
Technical Quality: 3
Clarity: 4
Questions for Authors: Why is the proposed method not tested also on resolution 384x288? This would allow better comparability in Table 2. Further, why is the performance advantage significantly smaller on COCO test-dev compared to COCO val?
Were other alternatives considered for the implicit function, such as learning a distance field?
As for the progressive refinement strategy, wouldn't it be useful to base the exploration on the gradient of heatmap value wrt query location? This seems cheaply differentiable and could give a more direct path towards the peak.
Since the target is always a Gaussian, how does the network output at test time look? Are less certain predictions wider in distribution?
As the queries are given as coordinates to the MLP, would it be possible to query truncated joints outside the heatmap bounds?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors provide an adequate "Limitations" section in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. Below we address all the concerns.
***W1: Some important alternative techniques from the literature are not discussed, and this impacts the conclusions. First, ...***
Here we analyze and compare our proposed method NerPE with the alternative techniques mentioned by the reviewer. As for Integral and its extension methods, from the perspective of supervision signals, they are essentially classified as coordinate regression, which is why they are not troubled by quantization errors. The original intention of these methods in constructing a heatmap-like form is to narrow the gap between coordinate regression and heatmap regression, rather than to overcome the limitations of the coarse downsampled grid. In contrast, our work aims to propose a more advanced heatmap representation scheme to improve the performance of most heatmap-based methods. PersonLab consists of heatmap regression and offset prediction. Although PersonLab can achieve sub-pixel positioning with the help of offset prediction, its performance still depends on the quality of heatmap regression. When the output resolution of the heatmap and offset field is not high enough, the performance of PersonLab is also unsatisfactory. Therefore, our continuous heatmap representation also applies to PersonLab.
***W2: The statement around L305 is not correct, the radius of the Gaussian does not have to be an integer for the gridded heatmap ...***
There is a mismatch between theory and practice. Most studies on heatmap-based human pose estimation only describe heatmap generation at a conceptual level using words or formulas, without discussing implementation in their papers. According to the official open source code, placing a fixed Gaussian kernel on the heatmap plane (i.e., integer Gaussian radius) is a standard operation in many influential works (such as SimpleBaseline, HRNet, HRFormer, ViTPose). To eliminate ambiguity, we will emphasize this detail in our paper and state that not all heatmap-based methods have this problem.
***Q1: Why is the proposed method not tested also on resolution 384x288? This would allow better comparability in Table 2. Further, ...***
We report the results of NerPE on the COCO test-dev set with the input resolution of 384x288 as follows, to supplement Tab. 2. The reason for the different performance gap between NerPE and the comparison methods is that the input resolution is the same in Tab. 1 and different in Tab. 2.
| Backbone | AP | AP$_{50}$ | AP$_{75}$ | AP$_{M}$ | AP$_{L}$ | AR |
| --- | --- | --- | --- | --- | --- | --- |
| HRNet-W48 | 76.2 | 92.6 | 83.6 | 72.8 | 82.0 | 81.2 |
***Q2: Were other alternatives considered for the implicit function, such as learning a distance field? As for the progressive refinement strategy, ...***
There is no competition between our work and techniques such as distance fields. Our purpose in introducing implicit neural representations into human pose estimation is to achieve the transition from discrete to continuous. We use the heatmap representation as the initial object to demonstrate the benefits of continuity, and the achievements can be generalized to other kinds of values stored as 2D pixel arrays, including distance fields. In fact, the reviewer's suggestion to use the gradient of the heatmap value amounts to combining heatmap regression and offset prediction from the perspective of implicit neural representations.
***Q3: Since the target is always a Gaussian, how does the network output at test time look? Are less certain predictions wider in distribution ?***
The trained NerPE is able to represent continuous heatmap distributions. The most direct way to obtain the coordinates of body joints is to query the confidence scores on the grid points according to the required resolution. Without retraining, visualization of the network output at different resolutions is given in **Fig. A2 of the attached PDF**.
***Q4: As the queries are given as coordinates to the MLP, would it be possible to query truncated joints outside the heatmap bounds ?***
Unfortunately, in the current experimental setup, it is not possible to localize body joints that are outside the heatmap bounds due to differences in data distribution. If supervision information on truncated joints is provided for training, it is theoretically feasible to query whether there are body joints at a certain position outside the image range.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply to my review.
> The original intention of these methods to construct a heatmap-like form is to narrow the gap between coordinate regression and heatmap regression, rather than to overcome the limitations of the coarse downsampled grid.
The issue of quantization errors (coarseness of the grid) is an explicit motivation for soft-argmax approaches [a] and [b], so there is considerable motivational overlap.
[a] Nibali et al., Numerical Coordinate Regression with Convolutional Neural Networks. arxiv:1801.07372, 2018
[b] Sun et al., Integral Human Pose Regression. ECCV, 2018
> Although PersonLab can achieve sub-pixel positioning with the help of offset prediction, its performance still depends on the quality of heatmap regression.
This may be true, however the offsets do enable a continuous output without quantization errors (not bound to grid points).
> When the output resolution of the heatmap and offset field is not high enough, the performance of PersonLab is also unsatisfactory.
I would expect that NerPE's performance is similarly impacted if the underlying feature map has lower resolution (even though the implicit heatmap is continuous). Is it not so?
Overall, I believe it would be important to discuss both of these prior lines of works in the paper. Even if they aren't doing the same thing, the motivation is similar enough.
> We report the results of NerPE on the COCO test-dev set with the input resolution of 384x288 as follows
Thank you, this result is convincing.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind reply!
> Overall, I believe it would be important to discuss both of these prior lines of works in the paper. Even if they aren't doing the same thing, the motivation is similar enough.
We will mention the soft-argmax approaches and PersonLab in Related Work to show the difference in solving the quantization error problem.
> I would expect that NerPE's performance is similarly impacted if the underlying feature map has lower resolution (even though the implicit heatmap is continuous). Is it not so?
For images with different resolutions and different backbone networks, we resize the encoded feature map $Z$ to 8×8 (L233), which means that each local feature vector $z^*$ is responsible for one of 64 equally divided areas of the 2D plane, independent of the resolution.
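As an illustration of this division, a query point can be mapped to its responsible cell as follows (the relative-coordinate convention below is our own assumption, chosen only to make the sketch concrete):

```python
GRID = 8  # the encoded feature map Z is resized to 8 x 8

def locate(x, y):
    """Map a normalized query (x, y) in [0, 1) to the index of the cell
    whose local feature vector z* is responsible for it, plus a coordinate
    relative to that cell's center."""
    cx, cy = int(x * GRID), int(y * GRID)   # which of the 8 x 8 = 64 cells
    rel_x = x * GRID - cx - 0.5             # relative coordinate in [-0.5, 0.5)
    rel_y = y * GRID - cy - 0.5
    return (cx, cy), (rel_x, rel_y)

cell, rel = locate(0.51, 0.10)              # cell (4, 0), offsets (-0.42, 0.30)
```

Because the lookup depends only on the normalized coordinate, the division is independent of the input image resolution, as stated above.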
If you have further questions, we are happy to discuss them. | Summary: This paper mainly studies the quantization problem of discrete representation of heatmap, especially in the case of low resolution. It proposes to use a continuous Implicit Neural Representation (coordinate-input MLP) to coarse-to-fine query to generate heatmap at arbitrary resolution, which achieves better performance than the current common heatmap method at lower resolution.
Strengths: - Heatmap quantization error has always been an important research topic, esp. at not high resolution.
- This paper is well-motivated and written in a simple and easy-to-understand way.
Weaknesses: I will give a borderline rating first, and want to see how the authors respond to the questions about method details, clarification of experimental settings, insufficient experiments, etc., before making a decision.
[Method]
- For training (Fig. 2), why not consider adding additional supervision in the area near the ground truth, or even progressive coordinate decoding, to refine the heatmap prediction near the ground truth? Wouldn't this give better results?
- So does the final 2D coordinate input into the INR MLP use positional embedding (L312)?
- The bias of confidence estimation comes not only from discretization but also from heuristic confidence estimation [a]. From the gap (confidence) between AP (accuracy + confidence) and AR (accuracy) in Tab. 1, we can see that confidence calibration is still not good enough, and this limitation should be mentioned in the Sec. 5 Limitation & Future Work as well.
[Experiments]
- Does the backbone of this method use COCO/heatmap pre-trained or ImageNet checkpoint? Please describe in detail.
- I don't understand the meaning of the OR/IR column (Tab. 1). Why is HRNet's OR/IR constant at 1/4 and does not change with the input size?
- I cannot find GFLOPs, AP, and AR results comparing different heatmap resolutions of the heatmap-based baseline method under a fixed input size. These may need to be added to Tab. 1.
- In addition to 256x192, it is also necessary to conduct experiments on a higher resolution 384x288 standard setting to test the applicability of the method's solving quantization error. It seems that the improvement of the method at high resolution is minor.
- How large are the overhead savings and performance improvements on the newer heatmap-based method ViTPose [48]? ViTPose is powerful, and improving it would make this work more widely applicable.
- There are also some methods that use the heatmap + offset prediction method [b], which also needs to be discussed.
- If I understand correctly, I guess that progressive coordinate decoding may perform similarly to a fully generated heatmap in easier cases, and may only find local maxima (i.e., suboptimal peaks) in more difficult cases (such as multi-peak, non-Gaussian, non-uniform heatmaps). If this is the case, then splitting the test by difficulty (such as number of joints, human size, etc.) [c] may be clearer.
- Since the values don't match, are the settings of Tabs. 5 & 6 ablation studies (82.65) different from Tab. 3 (87.7)?
- If the division in Tab. 5 is 16x16 or even higher resolution, the result should be improved further, though the cost gradually approaches the heatmap-based method? Could you add more reports in these regions (not necessarily done during the rebuttal)?
- The author mentioned that the difference in local feature vectors at the cell junction will make the predicted heatmap not smooth, so they proposed to use bilinear interpolation of local features (Ls202-207). Could we perform an ablation study to see the difference in results before and after and visualize the heatmap difference? I think we may also further consider the idea of sparse convolution to fuse neighborhoods.
- It is recommended that the ablation study in Tab. 7 be performed more standardized on COCO 128x128 (not necessarily made during the rebuttal period) because of CrowdPose biases towards crowded situations. The 64x64 input resolution is too small and does not seem to have much application in practice.
- Lack of heatmap visualization and comparisons with other methods
References:
[a] On the calibration of human pose estimation. ICML, 2024.
[b] Towards accurate multi-person pose estimation in the wild. CVPR, 2017.
[c] Benchmarking and error diagnosis in multi-instance pose estimation. ICCV, 2017.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors state their limitations as 1. the decoder structure not studied in depth, 2. discrete heatmaps still required to generate and considered as not elegant. Besides, some limitations mentioned in the weaknesses are suggested to include as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. Below we address all the concerns.
***W1: For training (Fig. 2), why not consider adding additional supervision ...***
Theoretically, additional supervision in the area near the ground truth can improve the performance of NerPE, but it also applies to discrete heatmap regression, which does not highlight the advantage of continuous representation. As for progressive coordinate decoding, it is only performed during inference.
***W2: So does the final 2D coordinate input into the INR MLP use positional embedding (L312) ?***
We use a common high-frequency function $\gamma(x)=(\cdots,\sin (2^{i} \pi x),\cos(2^{i} \pi x),\cdots)$ to encode coordinates as sinusoidal embeddings. The relevant description will be added to the paper.
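A minimal sketch of such an encoding follows; the number of frequencies is an illustrative choice, not a value from the paper:

```python
import numpy as np

def gamma(x, num_freqs=6):
    """Sinusoidal embedding gamma(x) = (..., sin(2^i pi x), cos(2^i pi x), ...).
    `num_freqs` is an illustrative hyperparameter, not the paper's setting."""
    x = np.asarray(x, dtype=float)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi      # pi, 2pi, 4pi, ...
    angles = x[..., None] * freqs                      # shape (..., num_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# A 2D query (x, y) expands to 2 coords * 2 * num_freqs = 24 input features
# before being fed to the INR MLP.
emb = gamma(np.array([0.25, -0.5]))                    # shape (2, 12)
```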
***W3: The bias of confidence estimation comes not only from discretization but also ...***
Since NerPE is designed for quantization errors, the performance gains from confidence calibration over existing methods increase as the input resolution decreases. Our continuous representation does not conflict with the correction for heuristic confidence estimation. We will mention this in Limitation & Future Work.
***W4: Does the backbone of this method use COCO/heatmap pre-trained or ImageNet checkpoint?***
NerPE has the same backbone and loss function as its baseline. Therefore, the training process is consistent with its baseline, where the model is initialized with the pre-trained weights from ImageNet.
***W5: I don't understand the meaning of the OR/IR ...***
OR is the output resolution and IR is the input resolution, which is explained in the caption of Tab. 1. Since HRNet belongs to explicit neural representation, once its network structure is determined, OR/IR becomes a constant.
***W6: I don't seem to find GFLOPs, AP and AR results ...***
We report the efficiency and performance in **Tab. A3 of the attached PDF**. The high amount of computation is the disadvantage of implicit neural representations (INR) when generating high-resolution signals. The results show that we reduce the GFLOPs from 73.6 to 9.5 by progressive coordinate decoding.
***W7 & W8: It is also necessary to conduct experiments on a higher resolution ... & How are the overhead savings and performance improvements on ViTPose ?***
Quantization errors caused by discretization decrease as the input resolution increases. Naturally, NerPE achieves greater improvements at low resolution than at high resolution. When the input resolution is 384×288, the accuracy of NerPE with HRNet and ViTPose as the backbone on the COCO test-dev set is as follows.
| Backbone | AP | AP$_{50}$ | AP$_{75}$ | AR |
| --- | --- | --- | --- | --- |
| HRNet-W48 | 76.2 | 92.6 | 83.6 | 81.2 |
| ViTPose-B | 76.8 | 92.6 | 84.3 | 81.8 |
***W9: There are also some methods that use the heatmap + offset prediction method ...***
The heatmap + offset prediction method can indeed achieve sub-pixel positioning accuracy, but it still depends on the quality of heatmap regression. When the resolution of the heatmaps and offset fields is not high enough, its performance is also unsatisfactory. Therefore, our method does not compete with it and can even be combined with it.
***W10: If I understand correctly, I guess that progressive coordinate decoding ...***
We agree. The intention of progressive coordinate decoding is to provide a simple way to accelerate inference at the cost of slight performance degradation, as reported in Fig. 3. Indeed, thanks to the decoupling from heatmap resolution, there are better potential implementations of coordinate decoding, as the reviewer mentioned.
***W11: Since the values don't match, are the settings of Tabs. 5 & 6 ablation studies (82.65) different from Tab. 3 (87.7)?***
The backbone of NerPE in Tab.3 is HRNet-W32 and in Tabs. 5 & 6 is ResNet-50, which is given in the description of the corresponding experiments but may not be eye-catching enough. We will highlight it in the caption.
***W12: If the division in Tab. 5 is 16x16 or even higher resolution, the result should be ...***
The results of dividing the cells into 16×16 and 64×64 are shown below. Combined with Tab. 3, we find that 16×16 is indeed a better choice than 8×8. When the division continues to increase to 64×64, the positioning accuracy decreases, because the over-fine division causes NerPE to degrade: since the confidence score within each region changes only marginally, the INR focuses more on image features than on relative coordinates.
| | 16×16 | 64×64 |
| --- | --- | --- |
| w/o uniform | 83.13 | 82.17 |
| uniform | 83.07 | 82.25 |
***W13: The author mentioned that the difference in local feature vectors at the cell junction will ...***
The local ensemble is quantitatively proven to be effective for local INR in its original paper. In **Fig. A1 of the attached PDF**, we supplement a qualitative ablation of the local ensemble by visualization. Sparse convolution is not suitable for INR, because the positions queried during training are not spatially arranged as 2D pixel arrays.
***W14: It is recommended that the ablation study in Tab. 7 be performed more standardized on COCO 128x128 ...***
CrowdPose has higher requirements for heatmap generation. The $\sigma$ of the Gaussian distribution should not be set too large; otherwise, it becomes difficult to distinguish the same kind of joints from different instances when they are close. Therefore, CrowdPose is more worth studying to determine the scale parameters $\sigma$ and $b$. Following the reviewer's suggestion, we will perform the ablation with an input size of 128×128.
***W15: Lack of heatmap visualization and comparisons with other methods.***
Compared with the existing methods with fixed output sizes, NerPE can generate the predicted heatmaps at arbitrary resolutions. Without retraining, visualization of NerPE's outputs at different resolutions is given in **Fig. A2 of the attached PDF**.
---
Rebuttal 2:
Comment: I carefully read the reviews of the other reviewers and the authors' careful rebuttal. I sincerely thank the authors for addressing most of my concerns carefully, especially the experiments on GFLOPs, inference latency/FPS, different input and cell resolutions, SOTA backbones, etc., which help give a more comprehensive understanding of the proposed method. The idea of INR is very straightforward, and it achieves particularly good performance in the low-resolution experiments. So, if the authors can promise, in addition to the rebuttal,
1. to add comparison and combination experiments with the "offset" method later (see W9 and other reviews),
2. to add some more visualization of non-standard Gaussian heatmaps, such as the difference between the rendered and baseline heatmaps for medium and hard cases [c] (Ws 10 & 15),
3. to open-source the code to facilitate reproduction for the community,
then I will consider raising my rating to **borderline** accept (**conditional**).
In addition, there are some follow-up questions:
**W13:** Can you explain why sparse convolution is not considered, which has been explored in 3D reconstruction and generation?
Title: Conditional Borderline
---
Rebuttal Comment 2.1:
Comment: We sincerely thank the reviewer for the kind reply and willingness to improve the score. We will supplement the experiments related to offset prediction and more visualization of non-standard Gaussian heatmaps in the next/final revision of our paper, as suggested by the reviewer.
> Can you explain why sparse convolution is not considered, which has been explored in 3D reconstruction and generation?
If we understand correctly, in 3D reconstruction and generation, continuous 3D data (e.g., point clouds) needs to be voxelized into a grid structure to meet the requirements of sparse convolution. This process is similar to heatmap discretization in human pose estimation, which is exactly what we want to avoid. Returning to our proposed NerPE: during training, the 2D coordinates used as one of the inputs are randomly sampled in the image plane, so they exist as a set of points in space and cannot be processed by convolution operations. This is why implicit neural representations used to model continuous signals are mostly implemented by MLPs, even when the task is image-related. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their time and constructive feedback. We appreciate their assessment of our work NerPE as a "well-motivated" approach for "an important research topic" (xx7e), "a more principled solution to producing arbitrary-resolution heatmaps" (k3b9), a "new idea" that "introduces implicit neural representations" (KNNC), and a "well-justified" approach to "address the weakness of prior works" (rvLc). All reviewers seemed to agree that our work features "good writing" (rvLc), is "easy-to-understand" (xx7e, KNNC), and is "clearly structured" (k3b9).
Inspired by the reviewers’ helpful comments, we will incorporate the following changes into the next/final revision of our paper:
- We test the performance of NerPE at a higher resolution (384x288) and with another backbone (ViTPose) on the COCO test-dev set (see Tab. A1 of the attached PDF).
- We report more about the division of cells in NerPE (see Tab. A2 of the attached PDF).
- We compare the efficiency and performance of NerPE and existing methods (see Tab. A3 of the attached PDF).
- We visualize the ablation of the local ensemble in NerPE (see Fig. A1 of the attached PDF).
- We show the heatmaps output by NerPE at different resolutions (see Fig. A2 of the attached PDF).
Pdf: /pdf/c50297126fb4bd9cefbb1463e0df4a6d796f6071.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neuronal Competition Groups with Supervised STDP for Spike-Based Classification | Accept (poster) | Summary: This paper aims to enhance the classification capabilities of SNNs using the STDP learning rule. The authors introduce the Neuronal Competition Group architecture to address the limitations of existing WTA mechanisms in supervised STDP classification. This architecture promotes balanced intra-class competition and improves class separation by employing a two-compartment threshold system. The paper demonstrates the effectiveness of NCGs in achieving accuracy improvements on some relatively large datasets like CIFAR-100.
Strengths: 1. The focus on local learning and the time-to-first-spike coding is highly encouraged. It improves energy efficiency in both training and inference parts, which is crucial for neuromorphic hardware applications.
2. The extensive experiments show the effectiveness of the proposed framework. Compared with other local learning rules, the proposed one achieves better performance.
Weaknesses: Global learning methods, such as surrogate gradient and ANN-to-SNN conversions, achieve significantly better performance. Consequently, in terms of accuracy, the proposed method is far from attractive. Could the authors provide a quantitative comparison between global learning methods and the proposed method across different aspects (energy, hardware implementability, etc.) to highlight the advantages of the proposed approach?
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the latency used in the paper?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The scalability of the proposed method to larger and more complex datasets beyond CIFAR-10 and CIFAR-100 is not thoroughly explored. Future work could address how well the NCG architecture scales with increasing data complexity and size.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We appreciate their encouragement regarding local learning and time-to-first-spike coding. We respond to their comments below.
> Global learning methods, such as surrogate gradient and ANN-to-SNN conversions, achieve significantly better performance. Consequently, in terms of accuracy, the proposed method is far from attractive. Could the authors provide a quantitative comparison between global learning methods and the proposed method across different aspects (energy, hardware implementability, etc.) to highlight the advantages of the proposed approach?
We appreciate the reviewer's useful suggestion, which helps highlight the value of our methods. We have added a section in Supplementary Material to compare our methods to global-based approaches. We included the content of this section in the **Author Rebuttal** answer (Section 3). We emphasized that, although our methods lag behind global-based algorithms in terms of accuracy, they offer benefits in computational and memory costs, energy efficiency, and better compatibility with on-chip training on ultra-low-power neuromorphic hardware.
> What is the latency used in the paper?
As stated in Section 3.1, firing timestamps are represented by floating-point values to align with event-driven neuromorphic hardware. We converted each normalized input feature $x$ into a spike timestamp as follows: $T\left(x\right) = 1 - x$. Hence, we cannot quantify the latency in terms of timesteps. Our spike encoding procedure is detailed in the Supplementary Material (Section 2.1).
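As an illustration of this encoding (our own minimal sketch, not the authors' released code; the function name is hypothetical), the conversion $T(x) = 1 - x$ maps stronger features to earlier spike times:

```python
import numpy as np

def encode_first_spike(features):
    """Convert normalized input features in [0, 1] to firing timestamps.

    Larger feature values fire earlier: T(x) = 1 - x, so x = 1 fires at t = 0
    and x = 0 fires at the latest possible time t = 1.
    """
    features = np.clip(np.asarray(features, dtype=float), 0.0, 1.0)
    return 1.0 - features

# A strong feature (x = 0.9) fires early, a weak one (x = 0.1) fires late.
timestamps = encode_first_spike([0.9, 0.5, 0.1])
```

Because timestamps are continuous floats rather than discrete timesteps, there is no clock-based latency to report, which matches the rebuttal's point about event-driven hardware.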
> The scalability of the proposed method to larger and more complex datasets beyond CIFAR-10 and CIFAR-100 is not thoroughly explored. Future work could address how well the NCG architecture scales with increasing data complexity and size.
We agree with the reviewer that our work would benefit from evaluation on more complex tasks beyond CIFAR-100. However, dealing with datasets like ImageNet remains a significant challenge for local-based approaches, particularly for networks combining unsupervised and supervised learning. As mentioned in the paper, we found no SNN work based on local learning rules, whether fully-supervised or semi-supervised, reporting results on CIFAR-100 or more complex datasets. Regarding ANNs, SoftHebb-CNN (Journé et al., Hebbian Deep Learning Without Feedback, CVPR 2023) reports SOTA performance on ImageNet (for local-based semi-supervised methods), with an accuracy of only 27.3% (they used a 5-layer CNN trained using Softhebb followed by a classification layer trained using GD). We believe that additional supervised layers (together with unsupervised feature learning) are required to address this type of task. Hence, in the future, we will extend our NCG architecture to hidden layers while preserving the local properties needed for on-chip training. We discuss the extension of NCGs to hidden layers in more detail in the **Author Rebuttal** answer (Section 1).
---
Rebuttal Comment 1.1:
Comment: The rebuttal addresses my concerns. I raise my score to 6. | Summary: This paper introduces Neuronal Competition Groups (NCGs) with a two-compartment threshold mechanism to optimize Winner-Takes-All competition in spiking classification layers using supervised STDP. NCG integration with supervised STDP rules significantly boosts image recognition accuracy on CIFAR-10 and CIFAR-100 datasets, showcasing balanced competition and improved class separation.
Strengths: The authors proposed a supervised STDP method to achieve better results on this type of task.
The authors proposed a method for NCG that attempts to solve the problems encountered in STDP training.
Weaknesses: The number of experiments is not sufficient; the paper compares only a few existing methods.
STDP is an unsupervised algorithm, and it is hard to associate it with supervised methods. Although some supervised STDP methods exist, it is hard to believe in their rationality.
The features used by the authors' method are extracted by other methods and do not seem to have been designed or trained by the authors themselves. That is to say, the authors' work is limited to the classification layer.
The survey of STDP methods, especially unsupervised ones, is not comprehensive and should be broadened.
Technical Quality: 2
Clarity: 2
Questions for Authors: Can the authors' method be extended to the preceding feature extraction layers?
The authors' experiments seem to include only a few method comparisons.
The experiments do not seem to analyze the finer properties of the proposed method in much detail. They are currently limited to classification results, with very little analysis of the method's properties.
STDP itself is an unsupervised local algorithm, so what is the point of extending it to a supervised algorithm? If it no longer satisfies the local and unsupervised properties, then why not use another, better-performing supervised algorithm?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and their time reviewing our work. We respond to their comments below.
> STDP is an unsupervised algorithm, and it is hard to associate it with supervised methods. Although some supervised STDP methods exist, it is hard to believe in their rationality.
Although it has not received as much attention as other learning methods, error-modulated STDP (or supervised STDP) has a solid foundation in neuroscience [1]. For example, in our brain, neuromodulators (such as dopamine) are known to influence synaptic plasticity. Our results, along with those of other papers (e.g. EMSTDP, GLSNN), show encouraging performance, which provides empirical evidence of its rationality. We believe this area of research deserves more exploration. Also, while we integrate our methods on top of supervised STDP rules, the contributions of this work are not focused on supervised STDP but rather on the training architecture (NCG) and the competition regulation mechanism (two-compartment threshold adaptation). In the revised paper, we showed that our STDP rule resembles a gradient-based or delta rule and that our methods can be used with other learning rules. We refer the reviewer to the **Author Rebuttal** answer (Section 2) and our response to the first question from reviewer **ygD7** for additional details.
[1] Frémaux et al. Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules. Front. Neural Circuits, 2015.
> The features used by the authors' method are extracted by other methods and do not seem to have been designed or trained by the authors themselves. That is to say, the authors' work is limited to the classification layer.
Indeed, our work focuses on the classification layer, as stated in the introduction. While features are extracted using existing methods, we implemented and trained the feature extractors by ourselves. We believe that using established methods for feature extraction, rather than custom approaches, is not a weakness. On the contrary, it demonstrates that our work is more likely to work with any kind of feature extractor and may motivate other researchers to use our classification layer on top of their own feature extractors.
However, we agree with the reviewer that the proposed NCG architecture is currently limited to the classification layer. We refer them to the **Author Rebuttal** answer (Section 1), which discusses this limitation.
> The survey of STDP methods, especially unsupervised ones, is not comprehensive and should be broadened.
Given the limited number of pages, we have chosen not to study these components as they are not part of our contributions. We invite readers to read the papers on the supervised STDP rules (references 22,24,29 in the manuscript) and the unsupervised feature extractors (references 20,52) employed to get a better understanding of these methods.
> Can the authors' method be extended to the preceding feature extraction layers?
The feature extraction layer is trained in an unsupervised fashion. Our methods are designed for supervised training: the NCGs require labeling each neuron to a class and the competition regulation mechanism must be aware of the input class. Hence, we cannot employ our methods for this layer. Also, it is important to recall that our work aims to address the unbalanced competition issues encountered in WTA-based supervised learning. In unsupervised learning, WTA competition has been widely studied in the literature (references 19,20,30,31 in the manuscript) and numerous solutions exist to solve the aforementioned issues.
> The authors' experiments seem to include only a few method comparisons.
For the accuracy comparison, Table 1 includes three methods: R-STDP, SSTDP, and S2-STDP. As far as we know, these three methods are the only existing supervised STDP rules designed for training SNNs with one spike per neuron (as mentioned in the paper). Therefore, there was no other relevant method available for comparison that could have been implemented in our classification layer. However, in the second paragraph of Section 5.2, we compared our methods with SOTA local-based approaches (comprising BP-based approaches and SNNs with multiple spikes per neuron). We showed that our results closely match or surpass fully-supervised SOTA work and outperform semi-supervised SOTA work.
> The experiments do not seem to analyze the finer properties of the proposed method in much detail. They are currently limited to classification results, with very little analysis of the method's properties.
Our main contribution relies on promoting, in a supervised context, the learning of various patterns per class through competition regulation. We analyzed the qualitative impact of NCGs with our competition regulation mechanism in Section 5.4 of the main paper and Section 3.5 of the Supplementary Material. In particular, we covered the number of updates (target/non-target) received by neurons during training, neuron behavior with labeling, and the similarities and differences between learned weights. We believe we have discussed the main properties of our methods that support their effectiveness in learning various patterns per class.
> STDP itself is an unsupervised local algorithm, so what is the point of extending it to a supervised algorithm? If it no longer satisfies the local and unsupervised properties, then why not use another, better-performing supervised algorithm?
We refer the reviewer to the **Author Rebuttal** answer (Section 2), which explains why we use supervised STDP. Also, note that supervised STDP maintains its local property when used to train a classification layer since the error is computed on the output of the layer.
---
Rebuttal Comment 1.1:
Comment: I am still somewhat confused about the authors' approach to the classification layer, but I thank the authors for their careful rebuttal, so I raise my score to 5.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reconsidering our work and will be happy to clarify any remaining questions regarding our methods. | Summary: The authors proposed the Neuronal Competition Group (NCG) and a novel competition regulation mechanism based on two-compartment thresholds, to effectively implement intra-class WTA competition in a spiking classification layer employing first-spike coding and supervised STDP training, which improves classification capabilities by promoting the learning of various patterns per class. The results on several image recognition datasets such as CIFAR-10 and CIFAR-100, demonstrate the effectiveness of the proposed methods.
Strengths: 1. The authors proposed the Neuronal Competition Group (NCG) for effective intra-class WTA competition in a spiking classification layer employing first-spike coding and supervised STDP training.
2. The authors proposed a novel competition regulation mechanism based on two-compartment thresholds to promote balanced competition and fair decision-making.
3. The proposed methods significantly increased the accuracy of SOTA supervised STDP rules on several common datasets.
Weaknesses: 1. It lacks a clear explanation of why we need to promote the learning of various patterns per class (intra-class competition), given that the final aim is to separate samples from different classes.
2. The authors talk about the advantage of proposed methods for neuromorphic hardware, which needs more analyses or discussion. Besides, its unfavorable impacts should also be considered, such as using high-precision floating-point values to represent firing timestamps.
3. The classification accuracies lag behind BP-based models (e.g., see “Spike-driven Transformer” by Yao et al., NeurIPS 2024), which are usually trained on GPUs. And the trained model can be deployed on neuromorphic hardware for inference, which is also highly energy-efficient.
4. The proposed methods improve the performance with more cost, so it’s better to add analyses or discussion about this tradeoff.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Including some important parts and a brief pipeline of the training algorithm in the main text, such as the definition of the average firing time, may aid understanding.
2. The definition and role of important hyperparameters should be described in the main text.
3. The right part of Figure 1 is not clearly explained, especially the “Error” indication. Besides, shouldn't inhibited spikes be absent in the “y≠1” condition?
4. See the weaknesses listed above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: An additional limitation may be the effectiveness of the proposed methods on large-scale datasets like ImageNet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and value their enthusiasm for our work.
**W1:**
We clarified this aspect in Section 4.1: "Different samples from a given class can contain distinct, mutually exclusive patterns or combinations of patterns. Learning all these patterns concurrently with one neuron may be challenging and impose strong generalization constraints (especially when using a single layer). Learning various patterns per class reduces these constraints and enables the emergence of more specialized patterns that better represent the training distribution. Since the prediction is based on the neuron that fires first, having multiple neurons per class increases the likelihood of a target neuron firing first for a sample of the class, thereby improving class separation.".
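The first-spike decision rule described in that passage can be sketched as follows (a minimal illustration under our own assumptions, notably contiguous grouping of $M$ neurons per class, not the authors' implementation):

```python
import numpy as np

def predict_class(firing_times, n_classes, m):
    """Predict the class whose NCG contains the first-firing neuron.

    firing_times: array of shape (n_classes * m,) holding the first-spike
    time of each output neuron; neurons are grouped contiguously by class.
    """
    grouped = firing_times.reshape(n_classes, m)
    # The class whose earliest-firing neuron fires first wins the decision.
    return int(np.argmin(grouped.min(axis=1)))

times = np.array([0.8, 0.6,   # class 0 neurons
                  0.4, 0.9])  # class 1 neurons
pred = predict_class(times, n_classes=2, m=2)
```

With multiple neurons per class, any one of a class's specialized neurons firing early is enough to win, which is the sense in which more patterns per class improve class separation.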
**W2 & W3:**
In the revised manuscript, we compared our methods with global-based approaches across several aspects, including hardware implementation. We refer the reviewer to the **Author Rebuttal** answer (Section 3), which includes the content of this section.
Note that training models on neuromorphic hardware offers several advantages, such as providing an energy-efficient alternative to GPU-based training and supporting continuous learning. Also, representing firing timestamps by floats is not a limitation because, as mentioned in the paper, we are aiming for event-driven neuromorphic hardware, where spike times are not constrained by a hardware clock.
**W4:**
We discussed the limitation in Section 6 of the revised paper: "First, while NCGs successfully improve the performance of a classification layer, they also increase the costs in terms of parameters, computation, and hardware. The computational overhead of NCGs scales linearly with the number of neurons. In hardware design, they introduce another overhead due to the additional connections, both to the previous layer and within the layer.".
We analyzed this tradeoff in more detail in Supplementary Material (Section 3.2 - Impact of the Number of Neurons): "[...] Figures 1 and 2 show that significant accuracy gains can usually be achieved with only three neurons per class. This reveals that our method remains effective even with a minimal number of additional parameters. When selecting the value of $M$, the tradeoff between optimizing accuracy and minimizing parameter cost must be considered.".
**Q1:**
We included the definition of the average firing time and weight normalization. Unfortunately, we do not have enough space to add a brief pipeline of the training algorithm in the main text. We direct readers to the complete algorithm in Supplementary Material.
**Q2:**
In the revised paper, we described the role of important hyperparameters:
- The firing threshold $\theta$ influences the number of integrated input spikes needed to classify a sample. It should be chosen in conjunction with the initial weight distribution.
- The time gap $g$ controls the distance between the desired timestamps and the average firing time. Its optimal value depends on the nature of the input data and the maximum firing time.
- The positive and negative learning rates ($A^+$ and $A^-$) control learning speed and determine the relative importance of long-term potentiation ($A^+$) versus long-term depression ($A^-$) in the learning process. The optimal ratio $A^+/A^-$ depends on the number of integrated input spikes at spike time.
- Increasing the number of neurons per class $M$ enables the learning of more diverse class-specific patterns but incurs additional costs in terms of parameters and computation.
- The threshold learning rate $\eta_{\mathrm{th}}$ defines the strength of competition regulation: higher values favor more balanced competition but may deteriorate pattern learning as training thresholds ($\theta'$) tend to increase progressively within an epoch. It should be chosen in conjunction with the firing threshold.
- The threshold annealing $\beta_{\mathrm{th}}$ influences the number of epochs during which competition regulation occurs. It should also be adjusted according to the threshold learning rate: higher threshold learning rates necessitate lower threshold annealing.
**Q3:**
The error arrow corresponds to any temporal error used to control the sign and intensity of the STDP update. Its value depends on the supervised STDP rule employed, which means that we cannot be more specific about it. In the figure, only the direction of the error is relevant. Also, inhibited spikes are shown for clarity, even though they do not "exist" because the associated neurons are inhibited. Due to lateral inhibition, each NCG produces only one spike per sample, whether it is a target or a non-target sample. We refer the reviewer to the attached PDF file with the **Author Rebuttal** answer for the figure with the revised caption.
**Limitation:**
We agree with the reviewer that our work would benefit from evaluation on more complex tasks. However, dealing with datasets like ImageNet remains a significant challenge for local-based approaches, particularly for networks combining unsupervised and supervised learning. As mentioned in the paper, we found no SNN work based on local learning rules, whether fully-supervised or semi-supervised, reporting results on CIFAR-100 or more complex datasets. Regarding ANNs, SoftHebb-CNN (Journé et al., Hebbian Deep Learning Without Feedback, CVPR 2023) reports SOTA performance on ImageNet (for local-based semi-supervised methods), with an accuracy of only 27.3% (they used a 5-layer CNN trained using Softhebb followed by a classification layer trained using GD). We believe that additional supervised layers (together with unsupervised feature learning) are required to address this type of task. Hence, in the future, we will extend our NCG architecture to hidden layers while preserving the local properties needed for on-chip training. We discuss the extension of NCGs to hidden layers in more detail in the **Author Rebuttal** answer (Section 1).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my concerns, I will keep the score. | Summary: The paper proposes Neuronal Competition Group (NCG), an architecture that maps each class in the task to a group of neurons using intra-class Winner-Takes-All (WTA) and competition regulation. The authors aim to implement effective WTA competition mechanisms in spiking neural networks (SNNs) employing first-spike coding and Spike Timing-Dependent Plasticity (STDP) learning rules.
Strengths: The paper is well-written, and many preliminary concepts are discussed, making it easy for readers to follow the discussion. The mechanisms discussed, such as WTA, lateral inhibition, STDP, and modulated STDP, have a solid foundation in neuroscience. Moreover, the experiments have been tested on fairly complex tasks, such as CIFAR-100, demonstrating the proposed method's effectiveness. The ablation studies provide compelling support for the design choices made for the method.
Weaknesses: The main limitation of the proposed method and experiments is that essentially all the ideas used in the method have been previously explored, so the contributions might be limited to combining them and making them work together. Additionally, a major weakness is that the method only works for a model with one layer of spiking neurons, and this layer is the last layer in the model. It is not clear why using STDP would be a better alternative than simply using a delta rule, where weight updates are just the product of the classification error and the inputs. The proposed method also requires a significant increase in the number of parameters in the last layer (by a factor of 5 in the experiments). Furthermore, all the experiments focus exclusively on static image classification tasks where no temporal information is available, which are not the most suitable tasks for SNNs. It is well known that artificial neural networks (ANNs) perform significantly better in such tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the authors present a comparison of the method using STDP versus a simple delta rule over the group of neurons in the classification layer?
- Could the authors present some results for sequential tasks where using SNNs could be more compelling than using other recurrent models?
- Could the authors elaborate on why using STDP in the classification layer is important? Local learning methods are useful for learning in deep layers since using backpropagation on custom hardware is expensive, but the last layer does not suffer from this problem.
- Could the authors discuss any computational benefits of using the proposed methods over others?
- Could the authors discuss the relationship between accuracy performance and the number of neurons per NCG beyond the five units used in the experiments?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The main limitation of the work is that the proposed method targets only the last layer of a neural network, so it is unclear why it would be a preferable alternative to other simpler methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive remarks, which have helped us improve the value of our work.
> The main limitation of the proposed method and experiments is that essentially all the ideas used in the method have been previously explored, so the contributions might be limited to combining them and making them work together.
We agree with the reviewer that the main concepts behind our contributions (WTA competition and threshold adaptation) are not new and have a solid foundation in neuroscience. However, we argue that these concepts are broad and have mostly been explored in unsupervised learning contexts. The literature exploring WTA for classification is very limited (to R-STDP) and faces unbalanced competition issues. To the best of our knowledge, our work is the first to focus on these issues and adapt the aforementioned concepts for supervised competitive learning, which extends beyond combining them and making them work together. We introduced, for the first time, a competition regulation mechanism (through threshold adaptation) that accounts for the input class information. Also, while intra-class WTA was introduced in a recent paper (see ref. 29 in the manuscript), it was limited to learning target and non-target patterns. In this work, we introduce a similar concept but with a different aim: learning various class-specific patterns. We believe our work is important for the future development of WTA competition in supervised learning.
**Q1:**
In the revised paper, we added a section discussing the benefits of STDP compared to other rules. The content is as follows:
To train the spiking classification layer, we employ an error-modulated additive STDP in which the weight change is the product of the error and the learning rate (see Eq. 3 in the main paper). The learning rate is positive for long-term potentiation (i.e. when the input neuron fires before the output neuron) and negative for long-term depression (i.e. when the input neuron fires after the output neuron). Given the simplicity of this STDP model, the weight updates without long-term depression resemble a gradient-based rule or a delta rule $^1$. To better understand the relevance of STDP with respect to this type of rule, we examine the accuracy of S2-STDP+NCG with and without long-term depression in Table 1 (the table is in the PDF attached with the **Author Rebuttal** answer). Results indicate that incorporating long-term depression generally leads to a slight improvement in accuracy. This improvement may be due to STDP considering all input spikes for weight updates. In addition, long-term depression enables faster training convergence by increasing the number of weight updates per sample. The number of epochs with long-term depression is reduced by an average of 15% for STDP-CSNN and 4% for SoftHebb-CNN.
$^1$ *that uses the same error as the STDP rule and a Heaviside function to convert input spikes into a continuous signal.*
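To make the resemblance to a delta rule concrete, here is a minimal sketch of an error-modulated additive STDP update (our own illustration with hypothetical names and parameter values, not the paper's implementation):

```python
import numpy as np

def stdp_update(weights, input_times, output_time, error,
                a_plus=0.01, a_minus=0.005):
    """Error-modulated additive STDP for one output neuron.

    Inputs firing before the output neuron (input_times < output_time)
    undergo long-term potentiation (positive rate a_plus); inputs firing
    after undergo long-term depression (negative rate -a_minus). The
    temporal error modulates the sign and magnitude of every update.
    """
    before = input_times < output_time
    lr = np.where(before, a_plus, -a_minus)
    return weights + error * lr

w = np.full(4, 0.5)
t_in = np.array([0.1, 0.3, 0.7, 0.9])
# Inputs that fired before t=0.5 are potentiated, later ones depressed.
w_new = stdp_update(w, t_in, output_time=0.5, error=1.0)
```

Setting `a_minus=0` removes long-term depression, leaving only the potentiation term, which is the delta-rule-like special case discussed above.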
**Q2:**
Our work would indeed benefit from results on datasets with a temporal dimension. Nonetheless, the feature extraction networks employed in this work are not designed for such datasets, which means that we need to find another Hebbian-based feature extractor capable of extracting temporal/spatiotemporal features. We will not be able to get results before the end of the rebuttal.
From a theoretical perspective, we believe that NCGs would still improve class separation in sequential tasks. Our architecture improves performance by promoting the learning of various patterns per class. If a classification layer with one neuron per class can classify temporal/spatiotemporal features, it should, when augmented with NCGs, also be capable of doing so with multiple neurons per class.
**Q3:**
We refer the reviewer to the **Author Rebuttal** answer (Section 2), which explains the importance of STDP in the classification layer.
**Q4:**
We refer the reviewer to the **Author Rebuttal** answer (Section 3), which compares our methods to global-based approaches.
**Q5:**
In Supplementary Material (Section 3.2), we studied the impact of varying the number of neurons per class ($M$). We found that smaller values ($M=5$) are optimal for simple datasets (MNIST, Fashion-MNIST). However, larger values ($M=10$) can further increase the performance of NCGs for harder datasets (CIFAR-10). On CIFAR-10 with STDP-CSNN, S2-STDP+NCG obtained an accuracy of 66.41 with $M=5$ and 67.17 with $M=10$. SSTDP+NCG obtained an accuracy of 64.05 with $M=5$ and 64.77 with $M=10$.
We are aware that NCGs improve performance with an additional parameter cost. In the revised paper, we highlighted that our methods remain effective even with a minimal number of additional parameters (3 neurons per class). We refer the reviewer to our response to **Weakness 4** from reviewer **EvbR** for additional details.
> The main limitation of the work is that the proposed method targets only the last layer of a neural network, so it is unclear why it would be a preferable alternative to other simpler methods.
It is unclear which simpler methods the reviewer is referring to, but we believe they mean training methods capable of multi-layer learning. We would like to emphasize that the contributions of this work focus on the architecture of the classification layer (NCG), rather than on the specific learning rule used to train it (supervised STDP). NCGs can theoretically be integrated into the classification layer of a multi-layer network with one spike per neuron and trained with a (non-)STDP-based method. We find it more interesting to study the gain that NCGs can bring to an existing method than to compare our models (e.g. S2-STDP+NCG) to other methods. However, we agree that the NCG architecture is currently limited to the classification layer and refer the reviewer to the **Author Rebuttal** answer (Section 1), which discusses this limitation.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed and thoughtful response. After considering the rebuttal, I will raise my score to 5. The NCG architecture presents an interesting approach. However, I remain concerned about its applicability and potential challenges when extending this architecture to hidden layers within a multi-layer network. Addressing this could significantly enhance the broader impact and versatility of the proposed method.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their interest in our work and for reconsidering their score.
Future research will likely have to address challenges in extending our architecture to hidden layers.
However, we are confident that our contributions will be beneficial to this future work. | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive feedback from the reviewers on our submission. We have made our best efforts to address all questions and concerns, which we believe have improved the quality of our work. We have attached a PDF file with a figure and two tables to support some of our responses. Below, we respond to the commonly raised concerns.
### 1. Limitation to a single layer
In this work, we propose the NCG architecture to implement WTA-based supervised competitive learning in a spiking classification layer. It is important to recall that the literature on WTA competition for spike-based classification is very limited (restricted to R-STDP), can be ineffective (due to inter-class WTA), and faces unbalanced competition issues. To the best of our knowledge, this work is the first implementation of WTA and competition regulation mechanisms specifically designed for classification. Before progressing to multi-layer networks, we intended to establish the relevance and effectiveness of such mechanisms in a single-layer supervised context. The concepts introduced in this work will benefit future research on implementing WTA-based supervised competitive learning in hidden layers:
- Hidden layers, like a classification layer augmented with NCGs, require more neurons than classes.
- Training hidden layers implies learning multiple patterns per class, which aligns with the objective of NCGs.
- Unbalanced competition issues may persist in hidden layers, making our competition regulation mechanism still relevant.
We discussed the limitation in Section 6 of the revised paper.
### 2. Importance of STDP in the classification layer
Our classification system comprises an unsupervised feature extractor and a supervised classifier. Unsupervised STDP is employed for training the feature extractor. For consistency and ease of implementation on neuromorphic hardware, the classifier should use the same type of learning rule as the feature extractor. Also, we target memristor-based neuromorphic hardware, where STDP is inherently implemented in memristor circuits (see reference 11 in the paper). Supervised STDP may therefore be realized by simply incorporating an error signal. We clarified these aspects in the introduction.
While we have justified the use of supervised STDP for this work, it is important to keep in mind that our contributions focus on the architecture, which is independent of the learning rule. Thus, NCGs may theoretically be used with any other rule designed for training SNNs with one spike per neuron. We performed preliminary experiments with S4NN [1], a gradient-based rule, and obtained similar accuracy improvements with NCGs (see Table 2 in the attached PDF). The lower accuracies on CIFAR-10 compared to S2-STDP are due to the approach used for defining the desired timestamps, not the update method (GD/STDP). We need more time to compute scores for other datasets and feature extractors but can provide them before the end of the reviewer-author discussion if requested. Since further research is needed to validate the effectiveness of NCGs with gradient-based rules, we will discuss this point in Section 6 of the revised paper and include S4NN results in Supplementary Material to support our discussion.
[1] Kheradpisheh et al. Temporal backpropagation for spiking neural networks, International Journal of Neural Systems, 2020.
### 3. Comparison with SOTA methods
In the revised paper, we included a section to compare our methods with global-based approaches regarding accuracy, computational efficiency, and hardware implementation. The content is as follows (we removed references due to space constraints):
SOTA methods for direct training of SNNs rely on backpropagation through time (BPTT) and surrogate gradient. These methods usually allow multiple spikes per neuron and support the training of very deep networks using global supervised learning. In this work, we allow one spike per neuron, train all layers with local learning (limiting our networks to shallow architectures), and use a semi-supervised training strategy, where only the last layer is trained with supervision. In terms of performance, our methods lag behind SOTA methods.
For instance, Li et al. (SEENN, NeurIPS 2023) report an accuracy of 96.44% on CIFAR-10 (our best model achieves 79.55%) and 81.65% on CIFAR-100 (our best model achieves 53.49%). This decrease in accuracy can partially be attributed to the number of layers employed (4 against 19) and the use of supervision limited to the last layer.

In terms of computational and memory costs, BPTT is extremely inefficient since these costs scale with the latency (i.e. the number of time steps), whereas the costs of our methods are independent of the latency. Also, a backward pass with BPTT adjusts all synapses in the network, whereas our methods adjust only the synapses of neurons that have fired. In terms of energy efficiency, our single-spike strategy may limit the number of generated spikes significantly compared to multiple-spike methods, which reduces power consumption in both training and inference.

In terms of hardware suitability, BPTT is challenging to implement on neuromorphic hardware because it relies on non-local learning. BPTT-based SNNs must be trained on GPUs, which is energy-intensive, and can be deployed on chip for inference only. To fully exploit the energy-efficient capabilities of SNNs, both training and inference should be performed on chip. We target memristor-based chips for hardware implementation of our methods. They are excellent candidates for ultra-low-power applications, potentially reducing energy consumption by several orders of magnitude compared to GPUs. Also, STDP is inherently implemented in memristor circuits, which facilitates on-chip training. There are still several challenges to address before our work can be implemented on this type of chip, such as the need for a digital module to calculate the error. This will be the focus of future work.
Pdf: /pdf/6833d9f84264c19b47ebcc47bef80a0c30fdb89a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MeshXL: Neural Coordinate Field for Generative 3D Foundation Models | Accept (poster) | Summary: The paper is about 3D mesh generative models. The generative method is done by generating triangles one-by-one (auto-regressively). Each face is represented by 3 vertices (9 coordinates).
Strengths: The idea of generating faces directly sounds more interesting than previous methods. Unlike PolyGen, which needs to generate connections (faces) between vertices, the method seems more elegant and easier to implement. The ShapeNet results are also convincing.
Weaknesses: The major limitation of the method is the sequence length. The method is only able to handle 800 faces with the current resources. This makes it nearly impossible for the method to generate complicated meshes.
The method also looks a bit brute-force. Some faces share vertices, but the method needs to generate the shared vertices multiple times. The method can also generate intersecting faces.
Technical Quality: 3
Clarity: 3
Questions for Authors: How is resolution defined? The number N (in L103) seems to be missing in the context. In L111, the authors mentioned it should be an unsigned integer. But it is still not clear what the number should be. In my experience, the number should be large (or the classification will be very difficult when training the autoregressive transformer). If the number is too small, then the resolution is limited.
I didn't find the sampling time and hardware. Sampling 7200 tokens takes lots of time. I would like to see a time analysis (and maybe memory consumption).
The training is done on Objaverse-XL but the main results are mainly about ShapeNet. The comparison is not fair. Also, I can only find that Fig. 6 is about general objects instead of ShapeNet objects. The quality seems to be very limited.
Even though the authors call the method a neural coordinate field, it has nothing to do with a field. A field should be a function with a continuous domain (like NeRF and signed distance fields). The method is similar to PolyGen. All vertices are discretized.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 📝 **W1: The limitation of sequence length.**
💡**A:** The main focus of our paper is to establish end-to-end methods with less inductive bias and pave the way for scaling up learning from large-scale 3D data. We restrict the number of faces to 800 for better alignment with PolyGen and MeshGPT. However, by equipping MeshXL with modern LLM techniques, such as RoPE and ring attention, we are ready to extend our method to more complicated 3D meshes. Additionally, we will be able to train larger MeshXL networks.
📝 **W2: Potential redundancy in the mesh representation and intersecting faces.**
💡**A:** **The redundancy in mesh representation paves the way for large-scale pre-training.** We acknowledge that the NeurCF representation does generate shared vertices multiple times. However, this enables us to represent each 3D mesh with **only one coordinate sequence** and design an **end-to-end** pipeline on large scale 3D data.
**Analysis of potential surface artifacts.** The potential occurrence of surface artifacts is a limitation of our MeshXL and, for now, of all existing auto-regressive mesh generation methods (PolyGen and MeshGPT). However, after training directly on large-scale 3D meshes, MeshXL can already generate high-quality 3D mesh data with a high success rate. It is also possible to further improve MeshXL by treating traditional methods as reward models or filters to eliminate the surface artifacts.
📝 **Q1: The resolution of MeshXL.**
💡**A:** To align with MeshGPT, we choose a resolution of $N=128$ in all our experiments. However, our MeshXL adopts an architecture close to large language models, so we can easily extend it to a larger $N$ with a time and space complexity of $O(N)$ by simply learning an $N$-sized coordinate embedding table and an output classifier with $N$ output channels. Compared to the $O(N^3)$ complexity of traditional hard-coded algorithms, it is easier for us to train models at higher resolution. We will also release models with larger resolution in the future.
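As a hedged illustration of this $O(N)$ parameter scaling (the function name and the hidden size $d$ are our assumptions, not the paper's implementation):

```python
# Minimal sketch (an assumption, not the paper's code) of why the resolution
# cost is O(N): the only N-dependent parameters are an N-row coordinate
# embedding table and an N-channel output classifier.

def head_param_count(N, d=512):
    embedding = N * d      # one d-dim vector per discrete coordinate value
    classifier = d * N     # output logits over the N coordinate values
    return embedding + classifier   # linear in N

# Doubling the resolution N doubles the parameter count, versus the O(N^3)
# growth of a dense voxel grid at the same resolution.
```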
📝 **Q2: Inference time analysis**
💡**A:** The inference time of our MeshXL is closely related to the **number of generated faces** and the **model size**. We perform inference time analysis with **a batch size of one using BFloat16 on a single RTX 3090**. We carry out an analysis of 1) the average inference time of generating a given number of triangles.
| #faces | MeshXL-125m time (s) | MeshXL-125m GPU Mem (GB) | MeshXL-350m time (s) | MeshXL-350m GPU Mem (GB) | MeshXL-1.3b time (s) | MeshXL-1.3b GPU Mem (GB) |
|-|-|-|-|-|-|-|
| 100 | 6.30 | 1.59 | 11.30 | 2.98 | 12.08 | 8.41 |
| 200 | 12.50 | 1.65 | 22.70 | 3.20 | 24.03 | 9.17 |
| 400 | 25.21 | 1.85 | 45.81 | 3.78 | 48.09 | 11.17 |
| 800 | 49.88 | 2.28 | 92.19 | 5.74 | 96.49 | 21.66 |
We also show 2) the average inference time of generating 3D meshes.
| | MeshXL - 125m | MeshXL - 350m | MeshXL - 1.3b |
|-|-|-|-|
|avg. time (s) | 29.49 | 44.65 | 49.43|
📝 **Q3: Fair comparisons and evaluations on larger datasets.**
💡 **A:** We reproduce MeshGPT with `gpt2-medium` (355m) and re-implement MeshXL (marked as MeshXL$^{ShapeNet}$) by first training the model on all ShapeNet categories before fine-tuning it to a specified category. 📊 Please refer to **the global rebuttal** for exact results. With a similar amount of parameters (350m), our method could consistently achieve better generation results with a higher COV score, a lower MMD score, and a closer 1-NNA score to 50%.
Additionally, to study the effectiveness of large-scale pre-training, we conduct evaluations on Objaverse. We show that as the model size grows, MeshXL exhibits better 3D mesh generation quality, with a higher COV score, a lower MMD score, and a closer 1-NNA score to 50%.
| Model | COV $\uparrow$ | MMD $\downarrow$ | 1-NNA | JSD $\downarrow$ | FID $\downarrow$ | KID $\downarrow$ |
|-|-|-|-|-|-|-|
| MeshXL - 125m | 39.76 | 5.21 | 67.34 | 26.03 | 17.32 | 4.48 |
| MeshXL - 350m | 40.79 | 5.20 | 65.68 | 23.71 | 15.14 | 3.33 |
| MeshXL - 1.3b | **42.86** | **4.16** | **61.56** | **20.99** | **12.49** | **2.94** |
Additionally, we have provided additional generation results in the **uploaded PDF file**. We have also open-sourced our code and pre-trained weights to the community.
📝 **Q4: Justification of Neural Coordinate Field.**
💡**A:** We thank the reviewer for pointing out this constructive suggestion.
1. **Justification of Neural Coordinate Field**. The neural coordinate is a representation that treats vertex coordinates as coordinate embeddings. Meanwhile, a field should be a function over a continuous domain. As we currently learn coordinate embeddings that uniformly spread along each axis, we will try learning interpolation functions (e.g. linear interpolation, Gaussian interpolation, or B-spline basis functions) to extend our embeddings to the continuous domain. We will also try extending our method to the continuous domain by replacing the coordinate embeddings with sinusoidal functions. We will perform detailed experiments in our revision.
2. **Relation to the ordering in PolyGen**. We adopt the same vertex representation as PolyGen, but a **different mesh representation**. Our MeshXL and PolyGen both represent vertices with discrete coordinates. However, **MeshXL represents a 3D mesh only with an ordered coordinate sequence**, while PolyGen decouples vertex generation and polygon generation, adopting two sequences for each mesh, i.e., the vertex sequence and the face index sequence that connects the generated vertices. Therefore, the potential redundancy in our representation in turn supports end-to-end training and better suits learning from large-scale 3D data. | Summary: The paper proposes MeshXL, a mesh generation model based on the Neural Coordinate Field (NeurCF), which encodes discretized coordinates of mesh vertices into a sequence of tokens.
Then a decoder-only transformer is trained to generate meshes unconditionally or conditioned on another modality.
The model is trained on multiple datasets for better performance.
In terms of quantitative and qualitative results, the method outperforms other baselines.
Strengths: The method proposes an end-to-end transformer model for mesh generation based on its neural coordinate field.
On the performance side, it produces 3D meshes with better quality.
Weaknesses: The weaknesses of the paper mainly lies in the technical part and the comparison part.
To my knowledge, the main difference from previous discrete mesh generation methods like MeshGPT is that MeshGPT first encodes the mesh with a VQVAE and then trains a generation model, while this paper does it in a single stage.
The paper does produce great results but seems to miss an explanation of why a single-stage approach can deliver this performance.
In the introduction part (lines 21–28), the paper discusses previous generative methods but does not explicitly point out what they are missing.
I think adding more analysis can help readers understand the paper better.
I am not sure the comparisons between MeshXL and PolyGen/GET3D in Table 2 are fair, since MeshXL is pre-trained and then fine-tuned on a specific category. I believe using the same dataset for training is more convincing.
For reference, I think template-based deformation methods and command-based methods can be added to the discussion:
TM-NET: Deep Generative Networks for Textured Meshes
Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency
DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation
DeepCAD: A Deep Generative Network for Computer-Aided Design Models
Computer-Aided Design as Language
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Since the MeshGPT is closely related, I think comparisons should be included.
https://github.com/lucidrains/meshgpt-pytorch
2. As mentioned in the weakness part, I am really wondering why end-to-end training can generate better results compared to two-stage method like PolyGen and MeshGPT, especially considering 2D SOTA generation model like the latent diffusion model is two-stage.
3. I am not sure the comparisons between MeshXL and PolyGen/GET3D are fair in Table. 2 since it uses pre-trained MeshXL and fine-tunes it for a specific category. I believe using the same dataset for training is more convincing.
4. Why does the scaling law seem not to apply to the lamp as shown in Table 2?
5. Normals in Fig. 7 seem worse (Columns 1, 3, 4) compared to GET3D.
6. will you release the code and pre-trained models since training MeshXL takes a lot of resources?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitation is discussed, I do not see any issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 📝 **W1: Reasons why single stage method works, and analysis on previous works.**
💡 **A:** We thank the reviewer for helping us improve our paper.
1. **The coordinate-sequence representation and auto-regressive generation make our one-stage method possible**. With a well-defined ordering system, each 3D mesh can be represented by **one unique coordinate sequence**. Additionally, decoder-only transformers have long been proven to excel at auto-regressive sequence generation. Therefore, by training a decoder-only transformer on the collection of coordinate sequences (3D meshes), our one-stage pipeline is able to generate high-quality 3D meshes.
2. **Analysis of existing works**. Though previous methods have achieved initial success in 3D assets generation, they suffer from certain limitations.
1. To preserve high-frequency information, the point and voxel representations require dense sampling on object surfaces, which inevitably leads to great redundancy when it comes to flat surfaces.
2. The reconstruction-based methods rely heavily on the quality of the generated multi-view images.
3. The VQVAE-based 3D generation methods require a two-stage training strategy, which poses extra challenges in large-scale training.
📝 **W2: Fair comparison with previous methods.**
💡 **A:** We reproduce MeshGPT with `gpt2-medium` (355m) and re-implement MeshXL (marked as MeshXL$^{ShapeNet}$) by first training the model on all ShapeNet categories before fine-tuning it to a specified category. 📊 Please refer to **the global rebuttal** for exact results. With a similar amount of parameters (350m), our method could achieve better generation results with a higher COV score, a lower MMD score, and a closer 1-NNA score to 50%. After pre-training on larger datasets, our method can achieve even better generation results.
📝 **W3: Comparison with deformation-based and command-based methods.**
💡 **A:** The training of MeshXL enjoys better flexibility compared to both deformation-based and command-based methods.
1. **MeshXL vs. deformation-based methods**. Deformation-based methods build on prior geometric knowledge for good initialization. However, they require external expert knowledge for good template initialization, which **is difficult to generalize to complex 3D meshes**. In contrast, our MeshXL directly learns from a large-scale collection of diverse 3D mesh data.
2. **MeshXL vs. command-based methods**. The command-based methods adopt command sequences to represent the creation process of the visual data. However, collecting commands is much harder than collecting the generated results (3D meshes). Additionally, the command space requires careful design and is often limited (`arc`, `circle`, and `extrude` in DeepCAD) to creating only simple objects, while our MeshXL learns directly from large-scale 3D mesh data with great diversity.
📝 **Q1: Comparison with MeshGPT**
💡 **A:** See weakness 2.
📝 **Q2: Reason why end-to-end training leads to better mesh generation results.**
💡 **A:** The main motivation of our work is to explore a simple and effective mesh representation to support large-scale training. Therefore,
1. **We can hardly say whether two-stage methods are better or worse than end-to-end learning**. However, training a good **vector-quantized** tokenizer is challenging [R1]. In latent diffusion, instead of adopting a VQVAE, the denoising diffusion process learns to generate latent codes predicted by a VAE, which learns the feature distribution of the latent codes.
2. **Our end-to-end design better supports large-scale training**.
1. By representing a 3D mesh with **one unique coordinate sequence**, MeshXL can directly learn from large scale 3D data.
Meanwhile, MeshGPT requires training a **vector-quantized** representation of vertex features. Additionally, PolyGen needs to generate a vertex sequence and then connect the generated vertices into polygons. Therefore, learning a good alignment between different modules is challenging and requires careful supervision.
2. Based on the quantitative results on ShapeNet categories in Table 2, MeshXL can produce high-quality 3D meshes with higher COV scores, lower MMD scores, and closer 1-NNA scores to 50% than previous methods.
[R1] Li, Tianhong, et al. "Autoregressive Image Generation without Vector Quantization." arXiv preprint arXiv:2406.11838 (2024).
📝 **Q3: Fair comparison with existing methods**
💡 **A:** See weakness 2.
📝 **Q4: Scaling law does not apply to the `lamp` category.**
💡 **A:** The limited training data leads to overfitting. After pre-processing the ShapeNet dataset, the lamp subset contains **only 565 samples** for training. Therefore, larger models will easily overfit. Instead, we show in Figure 3 of the main paper that when pre-training on extensive 3D data, MeshXL achieves both lower training and validation loss. Additionally, by evaluating on Objaverse, we also notice that as the model size grows, we achieve better generation results.
| Model | COV$\uparrow$ | MMD$\downarrow$ | 1-NNA | JSD$\downarrow$ | FID$\downarrow$ | KID$\downarrow$ |
|-|-|-|-|-|-|-|
|MeshXL - 125m| 39.76 | 5.21 | 67.34 | 26.03 | 17.32 | 4.48 |
|MeshXL - 350m| 40.79 | 5.20 | 65.68 | 23.71 | 15.14 | 3.33 |
|MeshXL - 1.3b| **42.86** | **4.16** | **61.56** | **20.99** | **12.49** | **2.94** |
📝 **Q5: Normal vector comparison with GET3D**
💡 **A:** In Figure 7, we show the normals to compare **the smoothness of object surfaces**. The results show that 3D meshes generated by GET3D have rough surfaces with tens of thousands of triangles, while ours depict the 3D shape with much smoother surfaces and fewer triangles.
📝 **Q6: Open-sourcing.**
💡 **A:** We have released our code and pre-trained weights to the community. We will also keep updating for additional features.
---
Rebuttal Comment 1.1:
Title: Reply to the rebuttal
Comment: Thanks for your great efforts! After reading the response, some major issues have been addressed well, so I still lean towards positive for the submission. I encourage the author to add these clarifications to the main paper. Thanks!
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your recognition of our work! We will incorporate the valuable feedback from all reviewers to enhance our main paper with additional analysis, comparisons, and evaluations. Thank you once again for your time and effort in reviewing our paper! | Summary: This paper proposes a way to use LLM to generate polygon meshes. The key idea is to model mesh generation as the next coordinate prediction, using a strategy similar to that of prior work like PolyGen and MeshGPT. The paper has shown the capability to create a mesh with reasonable quality.
I think the key contribution of the paper is that it shows the potential of LLM-style scaling to achieve better mesh generation, while the weakness is that it is unclear how much technical contribution it makes, given that most techniques have already been explored in prior works like MeshGPT and PolyGen.
Strengths: The paper shows the potential of using large-scale LLM-style models to produce better mesh generation. Specifically, with more compute and larger models, one can achieve stable training and better mesh generation quality.
Weaknesses: My main concern about the paper is its potential lack of novelty. The key idea of using an autoregressive model to generate a mesh has been explored in many prior works, such as PolyGen and MeshGPT. Neither the writing nor the results show how this paper differs technically from existing methods, other than running on larger datasets with larger-scale models. At the same time, it is not clear from the writing how this paper handles the many technical challenges of modeling a mesh as a sequence in a new way, such as permutation invariance of the faces.
Also, I think the paper lacks a proper evaluation of mesh quality. The introduction claims that the generated meshes have better quality and are potentially more suitable for downstream applications than other representations, such as point clouds or voxels. However, few metrics indicate that the generated meshes are of good enough quality for an artist to edit. For example, what is the ratio of generated meshes that are watertight? How good is the triangulation? Most generative metrics this paper reports concern the shape the mesh represents, not how well the mesh is triangulated.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Question about the ordering. Is the ordering for creating the sequence unique? How does it relate to or differ from the ordering in PolyGen? I think that in L103 the paper should cite PolyGen for the partition idea.
* L29-35 I believe this positioning risks overclaiming. The auto-regressive model can also have cumulative error issues. It's not entirely clear why auto-regressive mesh generation does not have "great redundancy when representing flat surfaces".
* L105 - what is a "polynomial face"? I think L101-106 requires a much more detailed description for readers to find it reproducible.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See the weakness section. Other limitations include restricted context length (i.e. mesh has very long context lengths, as many meshes have trillions of triangles).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 📝 **W1: The novelty of MeshXL.**
💡 **A:** The motivation of our MeshXL is to extend the **mesh representations**, **architecture design**, and **training strategy** in existing auto-regressive methods, i.e. PolyGen and MeshGPT, to support **efficient large-scale training on extensive 3D mesh data**. Specifically, our method improves existing methods from the following aspects:
1. **Training strategy.** Both MeshGPT and PolyGen adopt a two-stage training pipeline, which leads to a complicated training procedure that is less favorable for large-scale generative pre-training. In contrast, **MeshXL is a fully end-to-end pipeline, which naturally favors large-scale pre-training**.
2. **Mesh representation and architecture**. By modeling 3D mesh data into **one unique coordinate sequence**, our MeshXL only consists of a decoder-only transformer for **next-coordinate prediction**. Meanwhile, PolyGen requires a vertex sequence and a vertex index sequence to turn vertices into polygons. Additionally, MeshGPT adopts a vector-quantized approach by learning a fixed-size codebook to represent the vertex feature before learning to generate token sequences auto-regressively. Though we have introduced a bit more redundancy in our mesh representation, our method enjoys a much simpler architecture design and better supports large-scale training.
3. **Data Processing**. The data pre-processing in our MeshXL is also much simpler, as we are only required to permute 3D faces into a coordinate sequence. Our simplified data pre-processing increases the total throughput to further support large-scale training. Meanwhile, MeshGPT requires building a face graph based on the face connectivity within each 3D mesh to extract face and vertex features.
📝 **W2: Additional metrics for mesh quality assessment.**
💡 **A:** We will add more metrics for better mesh quality assessment in the revision.
1. **How good is the triangulation**. Following suggestions from reviewer XQPQ, we evaluate the aspect ratio, face area, and number of faces in the following table. Though the meshes generated by our MeshXL have a higher average aspect ratio, we achieve a smaller variance with far fewer 3D faces. This indicates the **stability** of our generation ability and the **efficiency** of the direct mesh representation. Since we train our MeshXLs only on triangular meshes, long thin triangles inevitably exist in our training data. In future work, we will co-train our MeshXLs on triangular meshes, 3D quads, and even hybrid representations to reduce the occurrence of long thin triangles and improve generation quality.
| Method | Aspect Ratio (mean) | Aspect Ratio (std.) | Face Area (mean) | Face Area (std.) | Number of Faces (mean) | Number of Faces (std.) |
|-|-|-|-|-|-|-|
| GET3D | 6.27 | 116.03 | 0.000 | 0.000 | 27251.80 | 11535.135 |
| MeshXL - 125m | 10.47 | 16.88 | 0.031 | 0.096 | 327.34 | 174.53 |
| MeshXL - 350m | 10.25 | 16.09 | 0.032 | 0.099 | 342.24 | 193.97 |
| MeshXL - 1.3b | 10.23 | 15.91 | 0.034 | 0.102 | 320.36 | 195.43 |
2. **Watertight meshes**. A watertight mesh does not have any boundary edges. Therefore, it is debatable whether we should generate watertight meshes in 3D asset generation. Currently, watertightness is mainly required to perform physical simulation or to turn 3D meshes into implicit fields and distance functions. However, in 3D asset generation, many common 3D shapes, including cloth, terrain, and leaves, are not watertight. Furthermore, it is also challenging and inaccurate to specify the interior of a 3D mesh for a reliable evaluation. Therefore, our method mainly focuses on establishing a more direct representation for 3D mesh generation.
📝 **Q1: Question about the ordering.**
💡 **A:** We will cite PolyGen in Line 103, and clarify the relation between our mesh representation and PolyGen's in Section 3.
1. We adopt the same ordering system as PolyGen and MeshGPT, which first permutes the vertices within each face cyclically based on their coordinates in z-y-x order (from lower to higher), then permutes the faces based on the permuted coordinates from lower to higher. Since there are no identical polygons in a 3D mesh, this ordering strategy creates **one unique sequence** for each 3D mesh.
2. **Relation to the ordering in PolyGen**. We adopt the same vertex representation as PolyGen, but a **different mesh representation**. Our MeshXL and PolyGen both represent vertices with discrete coordinates. However, **MeshXL represents a 3D mesh only with an ordered coordinate sequence**, while PolyGen decouples the vertices and polygons in 3D meshes, and represents each 3D mesh with two sequences, i.e., the vertex sequence and an index sequence to connect the generated vertices. Therefore, our coordinate sequence representation enables us to train an end-to-end model directly on large-scale 3D data.
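The ordering strategy and the resulting coordinate sequence can be sketched as follows (a minimal illustration of the z-y-x convention described above, not the authors' actual implementation):

```python
def order_mesh(faces):
    """Canonically order a mesh: rotate each face so its lowest vertex
    (compared in z-y-x order) comes first, then sort the faces by their
    permuted vertex coordinates (lower first)."""
    def zyx(v):
        x, y, z = v
        return (z, y, x)
    ordered = []
    for face in faces:
        # cyclic rotation so the lowest vertex leads
        start = min(range(len(face)), key=lambda i: zyx(face[i]))
        ordered.append(face[start:] + face[:start])
    ordered.sort(key=lambda f: [zyx(v) for v in f])
    return ordered

def to_coordinate_sequence(faces):
    # flatten the ordered faces into one coordinate sequence
    return [c for face in order_mesh(faces) for vertex in face for c in vertex]
```

Because both steps depend only on vertex coordinates, any cyclic rotation of the input faces (and any input face order) yields the same sequence, which is what makes the representation unique.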
📝 **Q2: Clarifying the claims.**
💡 **A:** We will clarify our claims and motivations in our revision.
1. We will modify the claim to "*compared to the indirect way of mesh generation, our end-to-end pipeline better supports large-scale pre-training, especially considering the training and data processing efficiency*". We will also conduct further studies to explore better auto-regressive pipelines.
2. In our main paper, our intended idea is that the 3D mesh representation has great flexibility: it can represent flat surfaces with much less data (i.e., two triangles for a rectangular surface vs. many points or voxels) and preserve details with more faces on curved surfaces. We will clarify this in our revision.
📝 **Q3: The definition of "polynomial face".**
💡 **A:** In our paper, a k-sided polynomial face is a polygon with k vertices and k edges. For example, triangles and quadrilaterals are special cases with three and four sides, respectively. We will clarify this in our revision. We have also open-sourced our code to help readers better understand our method.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the detailed response. After reading the other reviews together with the rebuttal, I am inclined to believe that the paper does carry some technical novelties to make mesh generation more scalable - some of which help simplify the pipeline, which is under-appreciated by the community. With that said, I am willing to change my score toward acceptance.
I strongly encourage the authors to revise the paper to bring out the technical contribution needed to simplify the pipeline and make mesh generation more scalable. I would appreciate more detailed discussion on the difference between MeshXL's ordering method and PolyGen's.
The evaluation on mesh quality is still unconvincing, since the presented metrics can be biased by the fact that the generated meshes are produced by a model which only sees short meshes. This means that the model can sacrifice other aspects of mesh quality, such as being watertight or free of self-intersecting faces, just to generate a subset of faces well. I believe this is still the most significant limitation of the current method and would appreciate it if the authors made appropriate acknowledgment of that.
---
Rebuttal 2:
Comment: We sincerely appreciate your recognition of our work and the valuable feedback to help us improve our paper.
In our revision, we will highlight that our work **paves the way for scaling up training on large-scale 3D mesh data**. Our **mesh representation** turns a 3D mesh into one unique coordinate sequence, which enables us to simplify our **architecture design** into a decoder-only transformer model, facilitating an **end-to-end training pipeline** that does not require sophisticated data pre-processing, careful model design, or complex training strategies, and better suits large-scale 3D mesh data.
To further improve our paper, we will include a detailed comparison between MeshXL's mesh representation and PolyGen's. Additionally, we will incorporate objective evaluations of the triangulations for mesh quality assessment.
We acknowledge that the potential generation of certain artifacts is a limitation for now. To alleviate the potential occurrence of artifacts, we will keep exploring methods to integrate domain knowledge as filters or reward models. We will also work on co-training MeshXL on triangle meshes, 3D quads, and even hybrid representations to reduce the occurrence of long thin triangles as also mentioned by reviewer XQPQ.
Once again, we thank you very much for your recognition and valuable suggestions from all reviewers to help us improve our work.
Title: Appreciate the recognition | Summary: This paper addresses the challenge of generating high-fidelity 3D meshes by introducing Neural Coordinate Field (NeurCF), an effective representation for large-scale sequential mesh modeling. The authors present MeshXL, a family of generative pre-trained auto-regressive models, which applies modern large language model techniques to 3D mesh generation. Extensive experiments demonstrate that MeshXL produces high-quality 3D meshes and outperforms existing methods on various tasks. Key contributions include validating NeurCF as a viable representation for 3D meshes, presenting MeshXL as robust base models for conditioned 3D mesh generation, and showcasing MeshXL's superior performance in generating detailed 3D meshes compatible with current texturing methods.
Strengths: 1. This paper trains a foundational mesh generation model using extensive datasets from ShapeNet, 3D-FUTURE, Objaverse, and Objaverse-XL, with the addition of data augmentation.
2. It proposes a novel 3D mesh representation that can be encoded as a token sequence, effectively leveraging the capabilities of autoregressive large language model approaches.
3. The paper establishes a fair evaluation metric, considering both the generation score (as shown in Table 2) and the 3D mesh quality from a graphics perspective (as shown in Table 3).
Weaknesses: 1. This method does not appear to incorporate domain knowledge from traditional remeshing techniques to ensure correct connectivity between different components, avoid self-intersections, and prevent flipping.
2. In the user study, more objective metrics for measuring mesh quality should be considered. For instance, in downstream tasks like ray tracing, long thin triangles should be avoided, and aspect ratio can be used to measure how thin these triangles are.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How does this method address common mesh surface artifacts in modeling, such as ensuring correct connectivity between different components, avoiding self-intersections, and preventing flipping?
2. In Section 4, we generate triangles within “<tri> · · · </tri>” and quadrilaterals within “<quad> · · · </quad>”. However, what is the form of the output in the results presented in the paper? Should these sequences of triangles and quadrilaterals be generated separately or can they be combined in the final meshing result?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Please check weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 📝 **W1: The absence of domain knowledge to prevent potential artifacts**.
💡 **A:** In our work, we put emphasis on exploring a sequential way to model 3D meshes that better suits large-scale generative training on extensive 3D data. Therefore, the potential generation of surface artifacts is currently a limitation of our method, as well as of PolyGen and MeshGPT. However, our MeshXL has the potential to incorporate traditional methods by treating the domain knowledge as **filters** or even **reward models** to eliminate certain artifacts.
📝 **W2: Assessing meshes with additional objective metrics**.
💡 **A:** We will add more objective metrics in our main paper. We calculate the face area and the aspect ratio with respect to the definition: $\text{Aspect Ratio} = \frac{\text{longest edge}}{\text{shortest altitude}}$. Since PolyGen generates polygon meshes rather than triangle meshes, we could not calculate the aspect ratio for PolyGen. From the table below, though GET3D achieves a lower average aspect ratio, it suffers from a higher variance with tens of thousands of faces. Meanwhile, MeshXL achieves a much more stable aspect ratio with a larger average face area, indicating that our MeshXL has the stability to generate high-quality 3D meshes.
| Method | Aspect Ratio (mean) | Aspect Ratio (std.) | Face Area (mean) | Face Area (std.) | Number of Faces (mean) | Number of Faces (std.) |
|-|-|-|-|-|-|-|
| GET3D | 6.27 | 116.03 | 0.000 | 0.000 | 27251.80 | 11535.135 |
| MeshXL - 125m | 10.47 | 16.88 | 0.031 | 0.096 | 327.34 | 174.53 |
| MeshXL - 350m | 10.25 | 16.09 | 0.032 | 0.099 | 342.24 | 193.97 |
| MeshXL - 1.3b | 10.23 | 15.91 | 0.034 | 0.102 | 320.36 | 195.43 |
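For a triangle, the altitude over an edge e equals 2·area/|e|, so the shortest altitude corresponds to the longest edge and the metric simplifies accordingly. A minimal per-face sketch of the computation (our illustration using Heron's formula, not the evaluation script itself):

```python
import math

def triangle_aspect_ratio(a, b, c):
    """Aspect ratio = longest edge / shortest altitude.
    Since the altitude over edge e is 2 * area / |e|, the shortest
    altitude corresponds to the longest edge, and the ratio reduces
    to longest_edge**2 / (2 * area)."""
    edges = [math.dist(a, b), math.dist(b, c), math.dist(c, a)]
    s = sum(edges) / 2.0  # semi-perimeter for Heron's formula
    area = math.sqrt(max(s * (s - edges[0]) * (s - edges[1]) * (s - edges[2]), 0.0))
    return max(edges) ** 2 / (2.0 * area)
```

An equilateral triangle attains the minimum value of 2/√3 ≈ 1.155, while long thin triangles score far higher, which is why the metric flags sliver faces.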
**How to alleviate long thin triangles**. Currently, we train our MeshXL on 3D triangular mesh data. Therefore, long thin triangles inevitably exist in our ground-truth training data, for example, the legs of tables or chairs. To alleviate the generation of long thin triangles, we will train our method on 3D quads in the future, which has proven to be an efficient and common mesh representation in the 3D design industry and could avoid the generation of long thin triangles.
📝 **Q1: Addressing potential artifacts in the generated meshes.**
💡 **A:** See Weakness 1.
📝 **Q2: The sequential mesh representations.**
💡 **A:** We will clarify the sequential mesh representations in the revision.
1. **We train only on triangular meshes in our paper**. In order to accelerate the data processing and training procedure, we choose an easier setting by turning all 3D meshes into triangular meshes to validate that the **next-coordinate generation** is capable of high-quality 3D mesh generation. Thus, the output of our method is always represented with $\text{<tri>} (x,y,z),(x,y,z),(x,y,z); ... \text{</tri>}$.
2. **We can easily extend MeshXL to hybrid representations**. In our paper, we present “<tri> · · · </tri>” and “<quad> · · · </quad>” to show that our method can potentially be extended to triangular meshes, 3D quads, and even hybrid representations. We will further extend our work to the hybrid representation to alleviate the existence of long thin triangles, as mentioned in Weakness 2.
3. If there are both triangles and quadrilaterals in a 3D mesh, our method also follows the ordering strategy introduced in Lines 111-114, which first permutes the vertices within each face cyclically based on their coordinates (z-y-x order, from lower to higher), and then orders the faces based on the permuted coordinates from lower to higher. Therefore, **a 3D mesh with both triangles and quadrilaterals can also be represented by one unique sequence**. Thus, even with the hybrid representations, our MeshXL can generate these 3D meshes in one sequence. | Rebuttal 1:
Rebuttal: We thank all reviewers for recognizing: 1) a **novel and elegant 3D mesh representation** (R1, R4) that 2) **effectively leverages LLM approaches** (R1, R2) for **end-to-end large-scale** training (R1, R3), and 3) **stable training and better mesh generation quality by scaling up** (R2, R3), supported by 4) a **fair and convincing evaluation** (R1, R4) covering both generation score and mesh quality from a graphics perspective (R1). (R1 - Reviewer XQPQ, R2 - Reviewer e1fg, R3 - Reviewer CF59, R4 - Reviewer AHoX)
We also thank all the reviewers for their valuable suggestions to help us improve our paper. We will address your concerns and revise the paper carefully. We have provided **additional visualization results in the attached pdf file**. Please find our item-to-item responses to your concerns below.
**Motivation and Novelty**. To better support large-scale training, we verify that we can represent a 3D mesh as **one unique coordinate sequence** based on a well-defined ordering strategy. With this simple mesh representation, our MeshXL only requires a single decoder-only transformer for sequence modeling. Compared to prior two-stage works, **our MeshXL is an end-to-end single-stage pipeline** that does not require sophisticated data pre-processing, careful model design, or complex training strategies. Therefore, our MeshXL better suits learning from large-scale 3D data.
**Additional baseline and evaluations**. In the following table, we reproduce MeshGPT with `gpt2-medium` (355m) using the third-party implementation. We also follow the setting from previous works by pre-training MeshXL (350m) on ShapeNet before fine-tuning to specific categories, marked as MeshXL$^{ShapeNet}$ in the following table. One can see that our method consistently achieves better results than all previous methods. Compared to MeshGPT with a similar number of parameters (~350m), our method achieves a higher COV score, a lower MMD score, and a 1-NNA score closer to 50%.
| Category | Method | COV$\uparrow$ | MMD$\downarrow$ | 1-NNA | JSD$\downarrow$ | FID$\downarrow$ | KID$\downarrow$ |
|-|-|-|-|-|-|-|-|
| **Chair** | PolyGen | 7.79 | 16.00 | 99.16 | 228.80 | 63.49 | 43.73 |
| | GET3D | 11.70 | 15.92 | 99.75 | 155.25 | 67.84 | 42.10 |
| | MeshGPT | 42.00 | 4.75 | 69.50 | 55.16 | 39.52 | 8.97 |
| | MeshXL - 125m | 50.80 | **3.11** | 56.55 | 9.69 | 28.15 | 1.48 |
| | MeshXL$^{ShapeNet}$ - 350m | 47.94 | 3.26 | 57.54 | 13.42 | 29.14 | 1.79 |
| | MeshXL - 350m | 50.80 | 3.17 | **55.80** | 9.66 | 28.29 | **1.39** |
| | MeshXL - 1.3b | **51.60** | 3.23 | **55.80** | **9.48** | **9.12** | 1.84 |
| **Table** | PolyGen | 44.00 | 3.36 | 67.20 | 25.06 | 54.08 | 14.96 |
| | GET3D | 16.80 | 10.39 | 91.90 | 226.97 | 67.65 | 34.62 |
| | MeshGPT | 34.30 | 6.51 | 75.05 | 92.88 | 53.75 | 7.75 |
| | MeshXL - 125m | 51.21 | 2.96 | 57.96 | **12.82** | 42.55 | **0.92** |
| | MeshXL$^{ShapeNet}$ - 350m | 49.75 | **2.90** | **54.72** | 13.75 | 44.92 | 1.80 |
| | MeshXL - 350m | 49.70 | 3.07 | 56.10 | 13.64 | 43.43 | 1.27 |
| | MeshXL - 1.3b | **52.12** | 2.92 | 56.80 | 14.93 | **22.29** | 2.03 |
| **Bench** | PolyGen | 31.15 | 4.01 | 83.23 | 55.25 | 70.53 | 12.10 |
| | MeshGPT | 34.92 | 2.22 | 68.65 | 57.32 | 52.47 | 6.49 |
| | MeshXL - 125m | 54.37 | 1.65 | 43.75 | 16.43 | **35.31** | **0.82** |
| | MeshXL$^{ShapeNet}$ - 350m | 55.75 | **1.46** | **44.64** | **10.66** | 36.81 | 1.48 |
| | MeshXL - 350m | 53.37 | 1.65 | 42.96 | 15.41 | 36.35 | 0.96 |
| | MeshXL - 1.3b | **56.55** | 1.62 | 39.78 | 15.51 | 35.50 | 1.60 |
| **Lamp** | PolyGen | 35.04 | 7.87 | 75.49 | 96.57 | 65.15 | 12.78 |
| | MeshGPT | 41.59 | 4.92 | 61.59 | 61.82 | 47.19 | 5.19 |
| | MeshXL - 125m | **55.86** | 5.06 | 48.24 | 43.41 | 34.61 | **0.84** |
| | MeshXL$^{ShapeNet}$ - 350m | 52.74 | **3.39** | 41.15 | **25.03** | 31.18 | 1.06 |
| | MeshXL - 350m | 53.52 | 4.18 | **49.41** | 34.87 | **25.94** | 1.92 |
| | MeshXL - 1.3b | 51.95 | 4.89 | 47.27 | 41.89 | 31.66 | 0.99 |
Pdf: /pdf/f426ce44ab1f63d787fe91a1b83de908532746ab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Instruction Tuning With Loss Over Instructions | Accept (poster) | Summary: This paper proposes a new method called Instruction Modelling (IM) for training language models, which applies a loss function to both the instruction and output parts of training data, rather than just the output. Through experiments on diverse benchmarks, the authors show that IM can improve model performance on both NLP tasks and open-ended generation benchmarks compared to standard Instruction Tuning (IT). The effectiveness of IM is influenced by two key factors: the ratio between instruction length and output length in the training data, and the number of training examples. IM is particularly beneficial for datasets with long instructions paired with brief outputs, or when using a small amount of training data. The authors hypothesize that IM's improvements stem from reducing overfitting during instruction tuning.
Strengths: - The paper revisits the fundamental approach to instruction tuning, which typically involves calculating loss only on the output portion of the data. By proposing Instruction Modelling (IM), which applies loss to both the instruction and output parts, the authors challenge this standard practice. This fresh perspective on a widely-used technique is a key strength of the paper.
- The authors conduct extensive experiments across many diverse benchmarks, demonstrating the broad applicability and effectiveness of their proposed method.
- The paper identifies and analyzes two crucial factors influencing IM's effectiveness: the ratio between instruction length and output length in training data, and the number of training examples. This analysis provides valuable insights for practitioners on when and how to best apply the IM approach, particularly in low-resource scenarios.
Weaknesses: - Similar ideas and conclusions have been proposed by previous work [1].
- While the paper presents empirical results showing the effectiveness of Instruction Modelling (IM), it lacks a strong foundation explaining why applying loss to instructions works. The authors hypothesize that IM reduces overfitting, but a more rigorous theoretical analysis could provide deeper insights into the mechanism behind IM's success.
- The experiments primarily use LLaMA-2 and OPT models. While these are significant models, the paper doesn't explore how IM performs across a wider range of model architectures or sizes, e.g., whether the conclusion also holds for a 34B or a 70B model.
[1] Instruction Fine-Tuning: Does Prompt Loss Matter? https://arxiv.org/pdf/2401.13586v2
Technical Quality: 3
Clarity: 3
Questions for Authors: - Are there any possible experiments that you can do to further explain why applying loss to instruction works?
- Do you think when we have a significantly larger size of instruction tuning data, the conclusion still holds? For example, we see the recent release of Llama-3 models, which adopted 10+ million instructions. And which factor is more important: the ratio between output and the instruction length or the number of instruction tuning samples?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. But the authors do not have a mandatory paper checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We are grateful for the reviewer's positive feedback: the fresh perspective, the extensive experiments, and the valuable analysis of crucial factors influencing IM's effectiveness. We would like to address the reviewer's valuable feedback as follows:
> Limitations: But the authors do not have a mandatory paper checklist.
We include the checklist on Pages 23 to 29 of our main PDF file. We recognize that it may not have been easily noticeable, and we will make it more prominent in the revised version.
> Similar ideas and conclusions have been proposed by previous work \[1\].
We appreciate the reviewer bringing attention to \[1\]. As noted in our related work section, we have already discussed this paper. While \[1\] introduces a hyperparameter to control the degree of instruction loss during training and investigates its impact, our work proposes a broader guideline. Instead of introducing new hyperparameters, we focus on when and how to include loss over instruction effectively and explore the underlying mechanisms. Our approach provides a more practical and widely applicable framework.
> While the paper presents empirical results showing the effectiveness of Instruction Modelling (IM), it lacks a strong foundation explaining why applying loss to instructions works. The authors hypothesize that IM reduces overfitting, but a more rigorous theoretical analysis could provide deeper insights into the mechanism behind IM's success.
We appreciate the reviewer's feedback. While we acknowledge the importance of rigorous theoretical analysis, it falls beyond the scope of our current work. Our paper aims to provide general, empirical insights for instruction tuning. Our key message is that the efficacy of masking user prompts during instruction tuning is an empirical question. We find that including prompt loss during training can be particularly advantageous when the number of instruction tuning data is limited and completions are short.
We recognise the challenges in rigorously proving improvements, especially given the varied knowledge carried by different pre-training models and instruction tuning datasets. A comprehensive theoretical analysis would need to account for these complex, interacting factors. Our empirical findings provide insights for future theoretical investigations into instruction tuning.
> The experiments primarily use LLaMA-2 and OPT models. While these are significant models, the paper doesn't explore how IM performs across a wider range of model architectures or sizes.
We conducted additional experiments on further models, including Phi and Gemma, which show qualitatively similar results (see Table 2 in the rebuttal PDF). However, we were not able to fine-tune 60-70B parameter models due to computational restrictions. We will add a note on this point to the limitations section.
> Are there any possible experiments that you can do to further explain why applying loss to instruction works?
We appreciate the reviewer's insightful question. While our current study provides empirical evidence for reducing overfitting through loss and BLEU analysis, we agree that further analyses could offer deeper insights into the underlying mechanisms. Several factors may affect the model's effectiveness, including assessing whether IM improves the factual correctness of responses or reduces toxic content in model outputs, as well as testing if IM enhances robustness to out-of-distribution (OOD) instructions.
To address this aspect, we conducted additional experiments comparing high-quality datasets \[2, 3\] with randomly selected datasets of the same size from the same source datasets. As shown in Table 1 of our attached PDF file, our results indicate that our approach generally performs better even when trained on randomly selected datasets. This finding suggests that our approach is robust across various data qualities and may be particularly beneficial when working with diverse, non-curated datasets. We will include these results in our revised paper.
> Do you think when we have a significantly larger size of instruction tuning data, the conclusion still holds? For example, we see the recent release of Llama-3 models, which adopted 10+ million instructions. And which factor is more important: the ratio between output and the instruction length or the number of instruction tuning samples?
We thank the reviewer for raising these important questions. We would like to clarify that our approach is not intended to replace instruction tuning in all scenarios; rather, we propose that the decision to mask user prompts during this process should be empirically driven. In scenarios with limited instruction tuning data and short completions, incorporating prompt loss during training can be particularly beneficial.
Regarding the relative importance of factors, we view the ratio between output and instruction length and the number of instruction tuning samples as fundamentally equivalent, both representing constraints on instruction tuning resources. Our research demonstrates that exposing the model to additional loss signals through instruction modelling—specifically by including loss on the prompt during training—can lead to more robust and effective models. This approach maximizes the utility of limited resources, potentially enhancing model performance across various tasks.
### Reference:
\[1\] Instruction Fine-Tuning: Does Prompt Loss Matter? Arxiv 2024\.
\[2\] LESS: Selecting Influential Data for Targeted Instruction Tuning, ICML 2024
\[3\] AlpaGasus: Training A Better Alpaca with Fewer Data. ICLR 2024\. | Summary: They propose that when updating models using instruction tuning, the models should also be updated based on the loss on the instruction itself.
This is a simple change that is un-intuitive, so the successful results are impressive.
Strengths: They include experiments from many different datasets and look at multiple different LLMs, and their results generally hold across them.
Weaknesses: It is unclear what the paper considers an "instruction"; there should be some examples of it. At one point it says that "static parts" of the instruction, like a "<user>:" token, are masked from the loss. Does that mean the rest of the instruction is all examples? An instruction like "Tell me the sentiment of this text" is going to be static across examples.
When looking at table 1, although their IM methods wins in terms of average performance, there are many datasets where it performs worse. This is glossed over in their prose.
Their "Loss analysis" experiments are not convincing, as they don't fully capture the changes to the model that occur in each setting. By only checking the loss on the continuations, they don't capture how their IM model might be more overfit to the instructions themselves. For example, if you looked at the train loss for the whole instruction + continuation for each model, the IM model's would most likely be far lower than the instruction-tuned model's. It makes sense that the IM model is going to be worse on the continuations after the same amount of training, as updates to get better at the instruction itself will cause conflicts. Discarding the model's performance on the instructions creates a self-fulfilling prophecy of the loss being higher when only the continuations are considered.
Technical Quality: 3
Clarity: 2
Questions for Authors: What is considered an "instruction" when training? The text mentions that "static" tokens like "<user>:" are masked out, but many instructions are static, i.e. "Tell me if the second sentence entails the first". If instructions are non-static are they mostly example like in In-Context-Learning?
They mention that IM is most effective in settings where the instructions are long. Is it possible that most of this gain is for essentially continuing pre-training on in-domain data a la https://arxiv.org/abs/2004.10964
Similarly, do you think the success in the SAH settings is possibly an artifact of getting a model that is just trained on a lot more data? How many more tokens does an IM model see than in IT model in that setting? Is there a way to hold the number of tokens seen constant? For example by training the IM model on even fewer examples to show that the gain really is from the training on the instruction?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We are pleased to receive positive feedback on the extensive experiments and the consistency of our results across diverse settings. We would like to address the reviewer's valuable feedback as follows:
> It is unclear what the paper considers an "instruction", there should be some examples of it. At one point it says that "static parts" of the instruction, like a "<user>:" token, are masked from the lost. Does that mean the rest of the instruction is all examples? An instruction like "Tell me the sentiment of this text" is going to be static across examples.
In our work, the instructions contain the following parts:
- Static templates. These are pre-defined templates selected at random; please refer to Lines 4-38 in the file `src/instruction_encode_templates.py` of our code repository (the link is in our abstract).
- Dynamic instructions from the user. These are the actual user-provided instructions contained in the “instruction” key of the examples.
- Formatting tokens. We consistently mask out formatting tokens like "\<user\>:" to concentrate on the content.
```json
{
"instruction": "In 1994 , the Polish ambassador to South Africa , Mr Durrant , presented the `` Warsaw Cross of Insurrection to SCieniuch 's widow .\nIn 1994 , the Polish ambassador to South Africa , Mr Scieniuch , presented the `` Warsaw Cross of Insurrection to Durrant 's widow .\n\nAre these two sentences paraphrases of each other?\nOptions are:\n 1). no.\n 2). yes.\n",
"completion": "1)",
},
{
"instruction": "Give three tips for staying healthy.\n\n",
"completion": "1.Eat a balanced diet and make sure to include plenty of fruits and vegetables. \n2. Exercise regularly to keep your body active and strong. \n3. Get enough sleep and maintain a consistent sleep schedule.",
}
```
We will add these examples in the revised paper.
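As a minimal sketch of how our approach differs from standard instruction tuning at the label level (assuming the common convention of -100 as the cross-entropy ignore index; this illustrates the idea, not our exact implementation):

```python
IGNORE_INDEX = -100  # conventionally skipped by cross-entropy losses

def build_labels(input_ids, instruction_len, instruction_modelling):
    """Build causal-LM labels for one example.
    Standard IT masks the instruction tokens so they carry no loss;
    Instruction Modelling (IM) keeps the loss over the instruction too."""
    labels = list(input_ids)
    if not instruction_modelling:
        labels[:instruction_len] = [IGNORE_INDEX] * instruction_len
    return labels
```

With `instruction_modelling=True`, every token of the concatenated instruction + completion contributes a loss signal; with `False`, only the completion tokens do.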
> When looking at table 1, although their IM methods wins in terms of average performance, there are many datasets where it performs worse. This is glossed over in their prose.
We will modify the text to acknowledge this point. We also respectfully present a different perspective: it is not trivial to discern a clear trend or pattern of how different methods affect model performance through a single or small number of tasks. This is precisely why our work includes 23 NLP benchmarks from 6 different categories and 3 LLM-based evaluations. Our analysis aims to provide a holistic view of how our approach and baselines impact downstream task performance. The observed performance variations across datasets offer valuable insights for future research.
> Their "Loss analysis" experiments are not convincing as they don't fully capture the changes to the model that occur in each setting.
Our loss analysis focuses on the loss of continuations, as including the loss over the instruction part for our approach could lead to an unfair comparison. In our paper, we also conducted a BLEU Score analysis to investigate the potential overfitting issue. This analysis compares the overlap between model outputs and the ground truth outputs in training examples. Our results show that our approach produces outputs with less overlap with the ground truth outputs in training examples, indicating less overfitting compared to the baseline models.
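To illustrate this kind of overlap analysis, a simplified clipped n-gram overlap between a model output and a training ground-truth output could be computed as below (a stand-in in the spirit of BLEU's modified n-gram precision, not the exact metric used in the paper):

```python
from collections import Counter

def ngram_overlap(output, reference, n=2):
    """Fraction of the output's n-grams that also appear in the reference,
    with counts clipped as in BLEU's modified n-gram precision."""
    def ngrams(text):
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    out, ref = ngrams(output), ngrams(reference)
    total = sum(out.values())
    if total == 0:
        return 0.0
    return sum(min(c, ref[g]) for g, c in out.items()) / total
```

A higher value indicates the model reproduces its training outputs more closely, i.e., stronger overfitting to the training set.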
> They mention that IM is most effective in settings where the instructions are long. Is it possible that most of this gain is for essentially continuing pre-training on in-domain data a la https://arxiv.org/abs/2004.10964
We thank the reviewer for this insightful question. Our approach differs from continued pre-training in several aspects:
- Our approach exposes the model to more loss signals under limited resources, whereas continued pre-training leverages **large-scale** unlabeled text to learn in-domain knowledge.
- Our method does not require an additional source of data and can be applied almost for free within existing finetuning pipelines; we can therefore expect it to see wide use.
- Continued pre-training typically targets performance improvement in specific domains or tasks. In contrast, our approach demonstrates effectiveness across a broad spectrum of 23 NLP benchmarks and 3 LLM-based evaluations, indicating greater generalizability.
- Our method specifically targets the relationship between instructions and their corresponding outputs, a focus not present in traditional continued pre-training approaches.
We will clarify these differences in our revised version.
> Similarly, do you think the success in the SAH settings is possibly an artifact of getting a model that is just trained on a lot more data? How many more tokens does an IM model see than in IT model in that setting? Is there a way to hold the number of tokens seen constant? For example by training the IM model on even fewer examples to show that the gain really is from the training on the instruction?
We would like to argue that the increased exposure to tokens in our approach is not a limitation, but rather a key advantage of our method. This aligns with observations in other areas of LLM research. For instance, Yi Tay discussed in his blog [1] that one potential reason decoder models outperform encoder-only models is due to greater "loss exposure" in the next token prediction objective compared to the denoising objective.
Similarly, in our approach, exposing the model to more loss signals through instruction modelling is beneficial. This is particularly useful in scenarios with limited instruction tuning resources. By including loss on the prompt during training, we leverage more of the available data, potentially leading to more robust and effective models.
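In implementation terms, the two objectives differ only in which tokens are masked out of the next-token-prediction loss. The following minimal sketch (plain-Python placeholders in the style of common finetuning pipelines, not our actual training code; the token ids are hypothetical and -100 is the usual ignore-index convention) illustrates the difference:

```python
IGNORE_INDEX = -100  # by convention, tokens with this label are excluded from the loss

def build_labels(prompt_ids, completion_ids, instruction_modelling):
    """Return per-token labels for the next-token-prediction loss."""
    if instruction_modelling:
        # IM: loss signal on every token, instruction and completion alike
        return list(prompt_ids) + list(completion_ids)
    # Standard instruction tuning: mask the prompt so only the completion is supervised
    return [IGNORE_INDEX] * len(prompt_ids) + list(completion_ids)

prompt = [101, 102, 103]  # hypothetical instruction tokens
completion = [201, 202]   # hypothetical output tokens

it_labels = build_labels(prompt, completion, instruction_modelling=False)
# -> [-100, -100, -100, 201, 202]
im_labels = build_labels(prompt, completion, instruction_modelling=True)
# -> [101, 102, 103, 201, 202]
```

With a long instruction and a short completion, the masked variant supervises only two of the five tokens here, which is the reduced loss exposure discussed above.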
### Reference:
[1] https://www.yitay.net/blog/model-architecture-blogpost-encoders-prefixlm-denoising | Summary: This paper proposes Instruction Modeling (IM), which trains LMs by applying a loss function to the instruction and prompt part rather than solely to the output part. The method is found to be effective on NLP tasks and open-ended generation benchmarks. This paper found two key factors that influence the effectiveness of the approach: (1) the ratio between instruction and output lengths, and (2) the quantity of training data. There is also additional analysis showing that IM can reduce overfitting.
Strengths: Generally the paper is well written and covers a wide range of experiments. It forms a comprehensive study on whether instruction tuning should calculate loss over the instruction part. Actually, it is a little surprising to me that calculating loss over the instruction part can improve the performance of the model, and the paper's reasoning behind this is insightful and interesting. Overall I think:
1. The finding is interesting and novel. This is in contrast to the common practice of calculating loss over the output part only. I think it's going to have broad impact on how people finetune LMs in the future if all the claims are true.
2. The experiments and evaluations are comprehensive. The paper covers a wide range of instruction datasets and quite a few widely used benchmarks.
3. The paper has a lot of details, in both the main body and the appendix. The appendix is very detailed and informative, which is good for reproducibility.
Weaknesses: A part I feel that's missing in the paper is analysis on the **quality of the instruction part**. One reason that people didn't calculate loss over the instruction part is that the instruction part is usually noisy and not well-structured, e.g., ShareGPT has a lot of user-shared low-quality instructions. The paper should analyze how the quality of the instruction part affects the performance of the model. Intuitively, if the model is also learning from the noisy instruction part, it's possible that the model will learn undesirable patterns from the instruction part, but I don't see this being discussed in the paper.
Moreover, the number of LMs being finetuned and tested is relatively small. It will be better if more families of LMs and larger models (e.g., 60-70B params) are tested.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Do you have any analysis on the quality of the instruction part? How does the quality of the instruction part affect the performance of the model? See "Weaknesses" part.
2. I might have missed this one in the paper, but how do you pack multiple samples during finetuning? Do you concatenate samples to form a fixed length and then truncate them? Or do you add padding tokens?
3. Do you have any scenarios where the IM is significantly worse than the baseline? If so, can you provide some examples? I want to know under what conditions IM is not effective and should be avoided.
4. Do you think IM can be used as a drop-in replacement for the current finetuning process? Or do you think it should be used in conjunction with the current finetuning process?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are being discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the effort and time of the reviewer (b1op). We are grateful for the positive feedback on our paper's comprehensiveness, novelty, potential impact, extensive experiments, and detailed presentation. We are particularly pleased that our well-written reasoning and thorough appendix were recognised. We would like to address the reviewer's valuable feedback as follows:
> A part I feel that's missing in the paper is analysis on the quality of the instruction part.
We thank the reviewer for this insightful comment. In response to this feedback, we have conducted additional experiments to address this aspect of our work.
We acknowledge that it is not trivial to define what constitutes high-quality data, and it is particularly challenging to judge the quality of instructions objectively. Given these difficulties, we have chosen to follow Alpagasus [1] in ICLR 2024 and LESS [2] in ICML 2024, where the overall quality of instructions and completions has been evaluated.
Specifically, we compare high-quality data subsets identified in previous work [1,2] with randomly selected data subsets of the same size from the same underlying source datasets. We utilised four datasets:
- Less Tydiqa (13,533 examples), selected from the source datasets Flan V2 and Dolly
- Alpagasus Dolly 3k (2,996 examples), selected from the source dataset Dolly
- Alpagasus Dolly 9k (9,229 examples), selected from the source dataset Dolly
- Alpagasus Alpaca 5k (5,305 examples), selected from the source dataset Alpaca
As shown in Table 1 in our attached PDF file, our results show that our Instruction Modeling (IM) approach performs better than baselines on both high-quality and randomly selected datasets. This finding suggests that IM is robust across various data qualities. We will include these results in our revised paper.
> Moreover, the number of LMs being finetuned and tested is relatively small. It will be better if more families of LMs and larger models (eg, 60-70B params) are tested.
We conducted additional experiments on additional model families including Phi and Gemma, which show qualitatively similar results (see Table 2 in the rebuttal pdf). However, we were not able to finetune 60-70B parameter models due to computational restrictions. We will add a note on this point to the limitations section.
> I might have missed this one in the paper, but how do you pack multiple samples during finetuning? Do you concatenate samples to form a fixed length and then truncate them? Or do you add padding tokens?
We appreciate the opportunity to clarify this point. We follow the standard training paradigm of instruction tuning, where we do not pack training examples. Instead, we add the padding token to the maximum sequence length for each training example. We will make this clearer in our revised paper.
> Do you have any scenarios where the IM is significantly worse than the baseline? If so, can you provide some examples? I want to know under what conditions IM is not effective and should be avoided.
We appreciate the reviewer's interest in the potential limitations of our method. In our extensive experiments, we have not observed scenarios where IM is significantly worse than the baseline. We have observed that when the two specific conditions, namely (1) limited instruction tuning data and (2) short completions, are not met, including the prompt loss during training generally has no significant impact on performance, making the choice largely empirical. We acknowledge that there might be specific edge cases or scenarios not covered in our current experiments. For example, if instruction parts in training examples include more toxic or harmful content, understanding how our approach affects model performance in such situations remains an area for future research.
> Do you think IM can be used as a drop-in replacement for the current finetuning process? Or do you think it should be used in conjunction with the current finetuning process?
We thank the reviewer for this question as it allows us to clarify our position. We are not proposing IM as a complete replacement for current fine-tuning processes. Rather, our key message is that the decision of whether to mask user prompts during instruction tuning should be made empirically. We find that the effectiveness of masking user prompts can vary depending on factors such as the amount of instruction tuning data available and the length of completions. In some cases, including the prompt loss during training might be advantageous. We appreciate the opportunity to clarify this point and will ensure it is clearly stated in our revised paper.
### Reference:
[1] AlpaGasus: Training A Better Alpaca with Fewer Data. ICLR 2024.
[2] LESS: Selecting Influential Data for Targeted Instruction Tuning. ICML 2024. | Summary: In this work, the authors propose to use instruction modeling (using loss over the full instruction-output pair) instead of just instruction tuning (using loss over the output given the instruction) as a method for supervised finetuning of LLMs. The authors demonstrate consistent gains over multiple benchmarks using this simple technique and hypothesize that the gains occur due to reduced overfitting to the instruction tuning dataset.
Strengths: 1. The proposed method is quite simple and scalable and, as such, would be of great interest to practitioners in the community.
2. The authors try to characterize the gains and attribute them to the extra signal obtained in the case of short outputs as well as to reduced overfitting. Both characterizations seem intuitive in explaining the results obtained. Further, the authors show that the proposed method is complementary to NEFTUNE, another recent method which results in consistent empirical gains.
Weaknesses: 1. Limited Models Explored: While the authors do a good job of experimenting with multiple instruction datasets, the experiments are only done with Llama-2 (7B and 13B) and OPT-7B models. This forgoes a plethora of open-source models, experiments on which would have greatly strengthened the paper. Given that these LLMs are all from the Meta family of LLMs -- this raises some suspicion as to the generality of the proposed technique when different architectures or modeling assumptions are made.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Do you have any results on other LLM families: Phi, MPT or Gemma?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the effort and time of the reviewer (z7Cy). We are thrilled to receive positive feedback on the simplicity and scalability of our method, and the recognition of our efforts to characterise the gains. The reviewer's acknowledgement of our intuitive explanations for the results and the complementarity of our method with recent approaches like NEFTUNE is particularly gratifying. We would like to address the reviewer's valuable feedback as follows:
> This forgoes a plethora of open-source models, experiments on which would have greatly strengthened the paper. Given that these LLMs are all from the Meta family of LLMs -- this raises some suspicion as to the generality of the proposed technique when different architectures or modelling assumptions are made. (......) Do you have any results on other LLM families: Phi, MPT or Gemma?
We appreciate the reviewer's concern about the generalizability of our results across different model families. In response to the reviewer's suggestion, we conducted additional experiments on additional models including Phi and Gemma, which show qualitatively similar results (see Table 2 in the rebuttal pdf). Our results show that our findings still hold with different language models.
The Meta family of LLMs are very standard transformers [1]. Indeed, the designers note that they tried to avoid innovating on the model architecture [2]. For example, the Llama model family has very standard features such as RoPE embeddings [3]. As such, we would not expect Llama family models to behave differently from other models.
### Reference:
[1] Llama 2: Open Foundation and Fine-Tuned Chat Models.
[2] The Llama 3 Herd of Models.
[3] RoFormer: Enhanced Transformer with Rotary Position Embedding | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their thoughtful feedback and valuable suggestions. We are pleased that our work has been recognised for its novelty (`b1op`,`vkoa`), simplicity and scalability (`z7Cy`), extensive experiments (`b1op`, `kinw`, `vkoa`), and potential impact (`b1op`). Reviewers also appreciated our intuitive explanations for the results (`z7Cy`,`vkoa`), the complementarity of our method with recent approaches like Neftune (`z7Cy`), our well-written reasoning and thorough appendix (`b1op`), and the consistency of our results across diverse settings (`kinw`). In response to the reviewers' comments, we have conducted additional experiments and analyses, which we believe significantly strengthen our paper. Key updates include:
1. **Additional experiments**: We have tested our approach on additional model families, including Gemma (2B) and Phi-1.5 (1.3B) (`z7Cy`, `b1op`, `vkoa`). We also conducted experiments comparing high-quality curated datasets with randomly selected subsets, showing that our approach is robust across various data qualities (`b1op`, `vkoa`).
2. **Clarification on instruction definition**: We have provided detailed examples of what constitutes an "instruction" in our work, including static templates, dynamic user instructions, and formatting tokens (`kinw`).
3. **Loss analysis and overfitting**: We elaborated on our loss analysis experiments and BLEU score analysis to address concerns about potential overfitting (`kinw`).
4. **Comparison with continued pre-training**: We clarified the differences between our approach and continued pre-training, highlighting the unique aspects of IM (`kinw`).
5. **Limitations**: We have clarified the location of our paper checklist (`vkoa`) and added notes on computational restrictions preventing experiments with very large models (60-70B parameters) (`b1op`, `vkoa`).
We believe these additions and clarifications address the main concerns raised by the reviewers and improve the overall quality and impact of our paper. We will incorporate these changes in our revised version.
Pdf: /pdf/78bbc76e104df25aafd1655a1a15eea879408b73.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning | Accept (poster) | Summary: This paper develops a new analysis of autonomous goal selection based on "latent learning progress". The key idea is that agents seek to maximize progress on a latent variable rather than an observed variable like performance. The paper reports an interesting experiment with humans that seeks to test whether people use latent learning progress to guide their goal selection.
Strengths: - The core theoretical idea (latent learning progress) is well-motivated and novel.
- The experiment is well-designed, interesting, and rigorously analyzed.
- The modeling approach is very thorough.
- The paper is clearly written.
- The literature review is comprehensive.
- Potentially impactful within cognitive science.
Weaknesses: - Latent learning progress wasn't formally defined until well into the middle of the paper, and there it was only operationalized in terms of the specific experimental setup rather than something more general.
- While motivated by work in AI, the paper isn't really written in a way that will have a broad appeal to an AI audience. I'm not sure how much impact this work will have at least within AI.
Minor:
p. 1: "Along factors" -> "Along side factors"
p. 4: "in which factor" -> "in which factors"; "combination of factors" -> "combinations of factors"
Technical Quality: 4
Clarity: 3
Questions for Authors: - Can the authors provide a more general formal definition of latent learning progress?
- How can latent learning progress be defined in such a way that it's likely to be both general and useful? My concern is that there are probably many versions of this which are not useful, depending on how one defines the latent variable whose progress is being monitored.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately described the paper's limitations. I don't see any potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our submission, and address their comments individually below.
---
**Weaknesses**
*Latent learning progress wasn't formally defined until well into the middle of the paper, and there it was only operationalized in terms of the specific experimental setup rather than something more general.*
We provide an intuitive explanation of LLP in the introduction, and save a formal definition for later in the paper, where all other related concepts have been introduced, to facilitate the reader's understanding of this definition. The point about a generalized version of the LLP definition was addressed in the global response.
---
*While motivated by work in AI, the paper isn't really written in a way that will have a broad appeal to an AI audience. I'm not sure how much impact this work will have at least within AI.*
As noted above, NeurIPS welcomes submissions from “Neuroscience and cognitive science” – therefore, we invite the reviewer to consider our contribution to such fields. That said, we provide several connections to the artificial intelligence and machine learning literature in the Introduction, Related Work, and Discussion sections of the article, and would welcome the reviewer’s suggestions to further bridge across different fields.
---
**Minor**
*p. 1: "Along factors" -> "Along side factors"*
*p. 4: "in which factor" -> "in which factors"; "combination of factors" -> "combinations of factors"*
Minor typos have been fixed in the camera-ready version.
---
**Questions**
*Can the authors provide a more general formal definition of latent learning progress?*
This point was addressed in the global response.
---
*How can latent learning progress be defined in such a way that it's likely to be both general and useful? My concern is that there are probably many versions of this which are not useful, depending on how one defines the latent variable whose progress is being monitored.*
This point was addressed in the global response.
---
**Limitations**
*The authors have adequately described the paper's limitations. I don't see any potential negative societal impacts.*
We included potential societal impacts in our initial submission, but we expanded it in response to reviewer y4se: “As our knowledge of human goal setting becomes more precise, however, ethical concerns regarding the use of behavioral sciences in marketing and management should be addressed, particularly in cases where highly personalized methods of influence could transform advertising into manipulation.”
---
Rebuttal Comment 1.1:
Title: response
Comment: Thank you for these responses, which address my comments. My score was already high (8), and I don't think the work justifies a higher score (9, which would imply it is "groundbreaking"), but I will advocate for its acceptance.
---
Reply to Comment 1.1.1:
Comment: We are glad to hear we were able to address all comments! Thank you again for the constructive feedback. | Summary: This paper examines how humans select between different possible goals. The authors designed a hierarchical reinforcement learning task where the main dependent variable was which goal participants chose to work on for each trial. Then, they built descriptive computational models to assess what parameters explained participants' goal choices throughout the duration of experiment. The parameters that were most predictive of the participants' goal selections were performance and latent learning progress.
The experimental task introduced in this paper bears a resemblance to Little Alchemy (https://littlealchemy.com/) insofar as participants combine together different elements to create new elements. Where the task differs is that, rather than freely combining elements without any specific goal in mind, the participants in this task had 6 (not including the testing phase) goal potions to create, and they received feedback about whether or not they successfully concocted their target potion for the trial.
Their findings contribute to understanding human learning and building autotelic machines. In terms of human learning, the paper provides new evidence that latent learning progress (rather than standard learning progress) drives people to choose some goals over others. In terms of building autotelic machines, new artificial agent models could incorporate latent learning progress when choosing goals to better mimic the adaptive and rapid learning of humans and animals.
Strengths: Goal selection (and intrinsic motivation more broadly) is a fundamental question in the study of both biological and artificial intelligence. This paper tackles a significant research question by using a novel experimental paradigm. The original paradigm introduced in this paper could also be modified for future work to address other questions like open-ended learning or could be made more broadly available online to test more diverse pools of participants in the future.
The writing is very clear throughout (and any lingering questions I had about the experimental task were clarified effectively in the appendix).
Another wonderful strength of the paper is their ability to report on inter-individual differences. They are able to fit descriptive models to each individual subject.
Weaknesses: While this paper has a number of important strengths, the main weakness is that it is not clear how well these findings would generalize to different tasks (especially more real-world tasks):
- First, the experimental paradigm is very specific. Users have to pick from a limited list of goals with the explicit over-arching goal of being able to create all of the potions successfully. Conversely, people usually generate their own goals (e.g., "I'd like to read this book", "I'd like to go on cross-country trip to visit Aunt Ida") rather than picking from a specific list, and they often choose these goals without any supervised over-arching goal.
- Second, the participants were university students (mostly female) participating for course credit. Would the results generalize to different populations?
- Third, the measure of latent learning progress (LLP) is entirely specific to this experimental task. The authors operationally define latent learning progress as 1 - (N action sequences tested / N possible action sequences). This measure only applies in situations where there are a finite number of options to try. How can future researchers extend this notion of LLP to open-ended tasks that use high-dimensional sensory and motor spaces (where this operational definition would not work anymore)?
To be clear, I don't expect the authors to design a new experimental paradigm or test a new demographic range of participants during the rebuttal period. But it would be helpful for them to address these as limitations.
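For concreteness, the operational definition quoted above is a one-line computation (a sketch with hypothetical counts, not values from the paper):

```python
def latent_learning_progress(n_tested, n_possible):
    """LLP as operationalized in the paper: the fraction of a goal's
    finite action-sequence space that remains unexplored."""
    return 1 - n_tested / n_possible

# Hypothetical example: 6 of the 24 possible action sequences tried so far
llp = latent_learning_progress(6, 24)  # -> 0.75
```

As the bullet notes, this only makes sense when `n_possible` is finite and enumerable, which is exactly the generalization concern raised above.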
On a separate note, the statistical analyses need effect sizes. (For example, a simple Cohen's d for each Wilcoxon / t-test would suffice.) Some of these differences in task performance are statistically significant because p < .05, but are not necessarily behaviorally significant because the difference (effect size) is so small. Also, if I'm reading the stats correctly, the results in text do not match Fig 2A: Complex hierarchical G3 mean performance is reported as .37, but the bar in Fig 2A looks like it couldn't be any higher than .3.
Minor:
- The in-text references are sometimes formatted a little strangely in terms of the number (for example citing 12-17, 11 instead of citing 11-17 or citing 1, 33, 34, 3, 35, 5, 6, 36, 37, 8 instead of citing 1, 3, 5, 6, 8, 33-37).
- The explanation of the experimental game (section 3.1) is not that clear. The figure (Fig 1) is definitely helpful, and the description in the appendix clarifies the game really well. You might consider reformatting the paragraph describing all of the goals into a list format.
- Figure 3 is difficult to understand as-is, but I think it might actually be a really interesting figure. Part of what makes it unclear is the y-axis: What is the "action sequence index"? I suspect there might be an alternative to a dot plot that would be easier to read (maybe even forgoing the y-axis and showing a single strip of color-coded trials for each subject perhaps?). It would also help the readers to add some descriptors to each graph so the reader knows which elements to look for (e.g., setting a few objectives rather than trying all goals, unprincipled switching between goals).
Technical Quality: 4
Clarity: 4
Questions for Authors: My questions are covered in the Weaknesses portion.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See Weaknesses section.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and their extremely detailed, helpful feedback. We address their suggestions for improvement below.
---
**Weaknesses**
*First, the experimental paradigm is very specific. Users have to pick from a limited list of goals with the explicit over-arching goal of being able to create all of the potions successfully. Conversely, people usually generate their own goals (e.g., "I'd like to read this book", "I'd like to go on cross-country trip to visit Aunt Ida") rather than picking from a specific list, and they often choose these goals without any supervised over-arching goal.*
We completely agree with the reviewer! In fact, we noted “In our task, people chose goals from a predefined menu of options. Although this facilitates the study of goal setting, people often invent their own goals by combining observations and imagination”. Having a highly controlled setup in which to study goal selection was essential for a first understanding of people’s approach, and our clustering analyses and plots of individual behaviors highlight vast variability in goal selection strategies even in such a limited space. Building on our initial findings, it may be possible to start exploring how people create and pursue more free-form goals (if not as rich as the examples mentioned by the reviewer, at least more self-designed than the ones we studied here). We note that ours is a standard approach to studying a novel, complex question: we start from a simplified experimental design to establish findings in a well-controlled setting, and plan to adapt our approach to increasingly more naturalistic, complex settings to pursue a more nuanced understanding.
---
*Second, the participants were university students (mostly female) participating for course credit. Would the results generalize to different populations?*
This point was addressed in the global response.
---
*Third, the measure of latent learning progress (LLP) is entirely specific to this experimental task. The authors operationally define latent learning progress as 1 - (N actions sequences tested / N possible action sequences). This measure only applies in situations where there are a finite number of options to try. How can future researchers extend this notion of LLP to open-ended tasks that use high-dimensional sensory and motor spaces (where this operational definition would not work anymore)?*
This point was addressed in the global response.
---
*On a separate note, the statistical analyses need effect sizes. (For example, a simple Cohen's d for each Wilcoxon / t-test would be suffice). [...].*
We now provide the effect size for all Wilcoxon tests as the standardized effect size r (Z/√N; Rosenthal et al., 1994; see attached PDF, “Updated statistics with effect sizes”). Note that we have replaced the symbol “Z” with “W” for non-standardized test statistics in Wilcoxon tests.
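For illustration, the conversion from a Wilcoxon test statistic to the standardized effect size r is a one-liner (the values below are hypothetical, not the statistics reported in our paper):

```python
import math

def wilcoxon_effect_size(z, n):
    """Standardized effect size r = Z / sqrt(N) (Rosenthal et al., 1994)."""
    return z / math.sqrt(n)

# Hypothetical values for illustration only: Z = 3.0 with N = 36 participants
r = wilcoxon_effect_size(z=3.0, n=36)  # -> 0.5, conventionally a large effect
```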
---
*Also, if I'm reading the stats correctly, the results in text do not match Fig 2A: Complex hierarchical G3 mean performance is reported as .37, but the bar in Fig 2A looks like it couldn't be any higher than .3.*
We are grateful to the reviewer for noticing this mismatch! The stats are correct, and we have updated the figure to match them (see attached PDF, “Updated Figure 2”). The previous figure was also correct, but aggregated the data slightly differently. The two are now consistent.
---
**Minor**
*The in-text references are sometimes formatted a little strangely in terms of the number [...].*
We thank the reviewer for noticing the citation formatting error, which we have fixed for the camera-ready version.
---
*The explanation of the experimental game (section 3.1) is not that clear. The figure (Fig 1) is definitely helpful, and the description in the appendix clarifies the game really well. You might consider reformatting the paragraph describing all of the goals into a list format.*
We thank the reviewer for their suggestion and for consulting the Appendix thoroughly. We had initially placed a full description of the task in the main body of the paper but had to move it to the Appendix due to space limitations. We intended to provide the reader with enough information about the task to understand the main takeaways and refer interested readers, with a more specific interest in cognitive science experiments, to details in the Appendix.
---
*Figure 3 is difficult to understand as-is, but I think it might actually be a really interesting figure. Part of what makes it unclear is the y-axis: What is the "action sequence index"? I suspect there might be an alternative to a dot plot that would be easier to read (maybe even forgoing the y-axis and showing a single strip of color-coded trials for each subject perhaps?). It would also help the readers to add some descriptors to each graph so the reader knows which elements to look for (e.g., setting a few objectives rather than trying all goals, unprincipled switching between goals).*
We thank the reviewer for their insightful analysis of Figure 3. “Action sequence index” refers to the unique combination of ingredients selected by the participants and their order (e.g., the first point would be [0, 1, 2, 3] – where 0 is the top-most ingredient on the screen, 1 the second, etc.). To improve clarity, we have relabeled the y-axis “Action sequence”, since the index is not defined in the text. We chose not to add further in-figure descriptors, to avoid cluttering the figure, making it unnecessarily large, and overloading the reader with information. We used the preceding paragraph to guide the reader through specific aspects of the figure they should be focusing on, and would therefore find it redundant to repeat the information in the figure caption.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response. I've reviewed your general response and your responses to me and the other reviewers.
I'm enthusiastic that your team plans "to adapt our approach to increasingly more naturalistic, complex settings to pursue a more nuanced understanding." Can you provide any more clarity about how the present study provides an entry point towards this goal? Once the settings are more naturalistic and complex, it seems like you won't be able to use the same computational definition of LLP, so you'll (computationally) need to start from "square one." Do you see the entry point as being the task, which could be adapted to be more open-ended? Or do you see the entry point as the overall theoretical idea of LLP?
In terms of effect sizes, thank you for including these for the revision. For future studies, I encourage the authors to consider effect sizes that aren't normalized by the number of participants. When effect sizes are normalized by sample size, the actual magnitude of the effect is obscured. (The raw magnitude/size of the effect is the same whether you test 50 or 500 people; the difference is that 500 will give you more statistical power.) This comes back to my original comment: "Some of these differences in task performance are statistically significant because p < .05, but are not necessarily behaviorally significant because the difference (effect size) is so small."
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for taking into consideration all our responses.
We are grateful for their suggestion on effect sizes, which we will consider for future studies, although we decided to take a relatively standard approach here.
As for entry points into more naturalistic studies, we view both paths the reviewer mentioned as interesting! We think our study provides an entry point into the study of goals in more naturalistic settings in at least three ways, including the two proposed by the reviewer:
- [Feasibility] This study serves as an entry point in that it establishes the validity of LLP in a simple environment, preparing the ground for more complex situations.
- [Task] The task we developed in this study could serve as a baseline for future work. One possible extension of the task would involve participants coming up with their own “goal potions” rather than selecting them from a predefined menu. In this case, LLP could still be defined in a similar way, provided that the number of possible combinations to achieve the participant-defined goal is known. Another possibility would be to use data from online videogames to study LLP, as was previously done in studies on motivation and (the standard version of) LP ([Brändle et al., 2024](https://osf.io/vg8dz)).
- [Theory] The theoretical idea of LLP could be computationally expanded to more complex internal models than simply the number of possible combinations.
We hope this further clarifies the points raised by the reviewer. | Summary: This paper looks at human goal selection during learning. A known useful signal for goal selection is learning progress (LP). LP measures performance from past observations, and is therefore only sensitive to measurable change in performance. The paper hypothesizes that goal selection is additionally driven by a “latent LP” (LLP) measure, which an agent infers from its knowledge of and interaction with the environment, and which therefore does not depend solely on observable performance improvements. Computationally, for each potential goal that can be set, LP is modeled as a value that is updated based on changes in the performance prediction error, while LLP is modeled as tracking how much of the solution space is currently unexplored (thus, it is a proxy for how close the agent is to solving the goal). A simple goal selection model is proposed under which goals are selected with probability given by a softmax function of goal-specific values. These values are a weighted sum of - among other factors - the aforementioned LP and LLP values. An experimental setup is then introduced in which participants were asked to iteratively select one of many goals and to attempt to solve that goal. The fact that participants had to actively select goals made participants’ goal selection behavior directly observable. Data from human participants reveals that the LLP factor is an important element in goal setting behavior.
Strengths: This is a very good paper. The writing is clear and the motivation and discussion of prior work in the introduction and section 2 is excellent. Understanding human goal selection is important. Not just in its own right, but also for human-AI interaction settings where better models of human behaviour can help to train better AI assistants or companions. Furthermore, a better understanding of how humans direct their learning may inspire future research on things like curriculum design for RL. The experiments presented in this paper are novel, rigorous, and clearly support the claim that LLP informs goal selection. Moreover, there is a wealth of additional information and additional results in the appendices.
Weaknesses: The definition of LLP introduced here is quite specific to a small and discrete space of action sequences. A more general definition or some preliminary discussion on how one could extend the current work to continuous or countably infinite spaces of action sequences would strengthen the work.
Technical Quality: 4
Clarity: 4
Questions for Authors: The goal selection model considers a weighted sum for combining the factors that contribute to the goal values. Did you consider or test more complex functions of the factors?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations are discussed in the conclusion. The authors do mention that potential future ethical concerns should be addressed, but do not expand on what those concerns may be.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their excellent summary of our article and their thorough feedback. We address their suggestions for improvement below.
---
**Weaknesses**
*The definition of LLP introduced here is quite specific to a small and discrete space of action sequences. A more general definition or some preliminary discussion on how one could extend the current work to continuous or countably infinite spaces of action sequences would strengthen the work.*
This point was addressed in the global response.
---
**Questions**
*The goal selection model considers a weighted sum for combining the factors that contribute to the goal values. Did you consider or test more complex functions of the factors?*
In our initial analyses, we indeed considered more complex functions of the factors. Specifically, we tried modeling more complex interactions in our current setup (e.g., interactions between performance and other goal selection motives), but did not find this procedure to improve fit. Moreover, we note that we already observe interesting dynamics with a simple model, which is preferable for interpretability. It is possible that more complex functions of the factors might be relevant in more complex settings, and this will be an important question for future research. Here, we follow the standard “Occam’s razor” approach of choosing the lowest sufficient model complexity given our data (Wilson and Collins, 2019).
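For concreteness, the weighted-sum-plus-softmax form of the goal selection model can be sketched as follows; the factor names, weights, and inverse temperature below are illustrative, not our fitted estimates:

```python
import numpy as np

def goal_choice_probabilities(factor_values, weights, beta=1.0):
    """Softmax over goal values, where each goal's value is a
    weighted sum of its factor values (e.g., LP, LLP).

    factor_values: (n_goals, n_factors); weights: (n_factors,);
    beta: inverse temperature (illustrative, not a fitted value).
    """
    logits = beta * (factor_values @ weights)
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy example: 3 goals described by two factors (LP, LLP)
factors = np.array([[0.1, 0.8],
                    [0.5, 0.2],
                    [0.0, 0.0]])
p = goal_choice_probabilities(factors, weights=np.array([1.0, 2.0]), beta=2.0)
```

A more complex combination rule (e.g., interactions between factors) would simply replace the inner product with a richer function of `factor_values`.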
---
**Limitations**
*Limitations are discussed in the conclusion. The authors do mention that potential future ethical concerns should be addressed, but do not expand on what those concerns may be.*
We thank the reviewer for the opportunity to expand on such concerns, and we now state that: “As our knowledge of human goal setting becomes more precise, however, ethical concerns regarding the use of behavioral sciences in marketing and management should be addressed, particularly in cases where highly personalized methods of influence could transform advertising into manipulation.”
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification, and for addressing the potential ethical concerns. I agree entirely with your justification for using a weighted sum of factors.
I think this is a really strong paper, and agree with the authors' view that it is suitable for NeurIPS. Based on the authors' responses to my review and the other reviews, I have decided to raise my score.
---
Reply to Comment 1.1.1:
Comment: We are happy to hear that all points were clarified. Thank you for your constructive feedback and for supporting our contribution to NeurIPS! | Summary: This work presents a hypothesis of a latent learning process that can guide autotelic agents in goal selection. Human experiments provide evidence supporting this hypothesis.
Strengths: Autotelic agents represent an important research direction. This work provides evidence from human experiments on latent learning processes, which could later be used to develop effective and personalized learning progressions for human-like autotelic machines.
Weaknesses: 1. All the subjects of the human experiment are students, which may introduce bias.
2. The definition of $V_f$ for each kind of model is unclear: e.g. in "performance", what does "currently active goal" mean? Where does $r^t$ come from? What is the relationship between it and the goal?
3. The design of LLP, which involves selecting multiple action sequences, does not provide a complete explanation of how LLP can improve human choices or actions better than LP.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Hierarchy influences goal selection through latent learning processes (LLP). Will separating the evaluation of hierarchy and LLP affect the results?
2. Section 6 discusses individual differences. Has there been an analysis of the causes behind these differences?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer kdQC for examining our submission and recognizing its importance and potential impact on machine learning. In the rebuttal period, we hope the reviewer could also help clarify what would improve soundness and presentation beyond the points raised so far. Regarding contribution, we would like to stress the fact that NeurIPS welcomes submissions from “Neuroscience and cognitive science” – therefore, we invite the reviewer to consider our contribution to such fields. We address all of the reviewer’s stated concerns below.
---
**Weaknesses**
*All the subjects of the human experiment are students, which may introduce bias.*
This point was addressed in the global response.
---
*The definition of V_f for each kind of model is unclear: e.g. in "performance", what does "currently active goal" mean? Where does r^t come from? What is the relationship between it and the goal?*
The currently active goal is the goal a participant has chosen on a given trial. To make our statement clearer, we now say “selected goal” instead of “currently active goal”. $r^{t}$ comes from the feedback provided to participants (the target potion either fills up or remains empty), as specified in the same sentence: “1 for positive feedback, 0 for negative feedback”. Feedback is goal-contingent, such that participants only receive feedback relative to the selected goal. We clarify this further in the text: “On each trial, the utility of the selected goal with respect to performance is updated based on the goal-contingent feedback received on that trial $r^{t}$”.
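In generic form, such a goal-contingent update can be sketched as a delta rule; the learning rate below is an illustrative assumption, not a value taken from the paper:

```python
def update_performance_utility(V, r, alpha=0.1):
    """Delta-rule update of the selected goal's performance utility
    from goal-contingent feedback r (1 = positive, 0 = negative).
    alpha is an illustrative learning rate, not a fitted value."""
    return V + alpha * (r - V)

# Repeated positive feedback drives the utility toward 1
V = 0.0
for _ in range(50):
    V = update_performance_utility(V, r=1)
```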
---
*The design of LLP, which involves selecting multiple action sequences, does not provide a complete explanation of how LLP can improve human choices or actions better than LP.*
As mentioned in the paper, “unlike LP, LLP does not require observing performance improvements to provide informative signals about progress.” This would make LLP particularly useful in situations (mentioned in the paper) “where no external change is visible, yet some other form of progress toward the desired outcome is made. Imagine being tasked with identifying the correct sequence of numbers to open a combination lock, which you might attempt through trial and error. Throughout most of this scenario, repeated failures would yield no difference in performance, hence no empirical learning progress (as typically defined). Nonetheless, provided that the lock has a limited number of slots and numbers and that you can avoid repeating incorrect combinations, every attempt is a step toward the solution.”
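The lock example can be made concrete with a small sketch; here LLP is proxied by the fraction of the finite solution space already ruled out (our task-specific notion of latent progress, with illustrative lock dimensions):

```python
from itertools import product

def latent_progress(n_tried, slots, digits):
    """Fraction of the finite solution space ruled out so far.
    With no repeated attempts, every failure is latent progress
    even though observed performance never improves."""
    return n_tried / (digits ** slots)

# A 3-slot lock with digits 0-9: 1,000 possible combinations
tried = set()
for combo in product(range(10), repeat=3):
    tried.add(combo)
    if len(tried) == 250:
        break
llp = latent_progress(len(tried), slots=3, digits=10)
```

Throughout these 250 failed attempts, empirical LP would remain flat, while this latent quantity increases steadily.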
---
**Questions**
*Hierarchy influences goal selection through latent learning processes (LLP). Will separating the evaluation of hierarchy and LLP affect the results?*
Hierarchical components are a design choice in our experimental paradigm. Removing the hierarchical feature would likely make it more difficult to tease LLP and LP apart. We find, however, that exactly how hierarchy and LLP relate to and influence each other is an interesting question. Our current findings suggest that “hierarchy impacts goal selection indirectly by enabling inferences and thus affecting LLP”.
---
*Section 6 discusses individual differences. Has there been an analysis of the causes behind these differences?*
The goal of the current paper was not to identify the sources of individual differences in goal selection. Noting that such differences exist is in fact a novelty of our submission. That said, we agree with the reviewer that the question of individual differences is worthy of further exploration, and we mention that it would be interesting for future studies to relate findings similar to ours to participants’ data on “demographics and cultural background, cognitive abilities, and psychopathology”.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, which addressed most of my questions.
The only unclear thing left is about the model fitting described in lines 208-215. Could you clarify what the fitted parameters and loss function are in this fitting process? How did you determine the 'responsibilities' of each model in explaining participants' behavior?
---
Reply to Comment 1.1.1:
Comment: We are glad that the reviewer found our response exhaustive, and thank them for bringing up an additional question. Due to space limitations, we did not fully expand on Hierarchical Bayesian Inference (HBI) in the manuscript and instead redirected the reader to the original article that introduces this model fitting method ([Piray et al., 2019](https://doi.org/10.1371/journal.pcbi.1007043)). The cited paper contains full details and information, as well as a [GitHub repository](https://payampiray.github.io/cbm) with the code to implement it.
Briefly, HBI performs concurrent model fitting and model comparison. It characterizes a population-level distribution of parameters from which individual estimates are drawn, in a way that is proportional to the probability of each subject’s data being generated by each model, i.e., the model’s responsibility with respect to each subject. Large values of responsibility (close to 1) for a subject and model indicate that the model is likely to be the best underlying model for the subject. The HBI algorithm comprises four steps, which are iterated on until stopping criteria are met: 1) calculate the summary statistics, 2) update posterior estimates over group parameters, 3) update the posterior estimate over each individual parameter, 4) update estimates of each model’s responsibility with respect to each individual subject. For individual parameters, we used the default priors of 0 for the mean and 6.25 for the variance. We refer the reviewer to the original article ([Piray et al., 2019](https://doi.org/10.1371/journal.pcbi.1007043)) for all mathematical details.
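As a toy illustration of the responsibility normalization alone (the full HBI algorithm of Piray et al., 2019 estimates responsibilities jointly with group- and subject-level parameters, so this is only a simplified sketch with hypothetical numbers):

```python
import numpy as np

def responsibilities(log_evidence):
    """Normalize per-subject, per-model (log) model evidence into
    responsibilities in [0, 1] that sum to 1 across models.
    log_evidence: (n_subjects, n_models)."""
    z = log_evidence - log_evidence.max(axis=1, keepdims=True)
    w = np.exp(z)
    return w / w.sum(axis=1, keepdims=True)

# Hypothetical log evidences for 2 subjects and 2 candidate models
le = np.array([[-100.0, -110.0],   # subject 1: model 0 clearly favored
               [-205.0, -200.0]])  # subject 2: model 1 favored
r = responsibilities(le)
```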
While relatively new, this method has been used successfully in several applications (at the time of writing, Piray et al., 2019 counts over 100 citations). HBI outperforms more traditional statistical tools, such as maximum likelihood model fitting, as it is less prone to overfitting and less likely to favor overly simplistic models (see [Piray et al., 2019](https://doi.org/10.1371/journal.pcbi.1007043)).
We hope this answer satisfies the reviewer. Given that all other points were addressed in our previous response, we hope the reviewer will consider increasing their score, or help us address any remaining questions by stating them in a comment. | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the reviewers’ positive and constructive feedback.
The reviewers highlighted several strengths of the paper, including the important research direction it aligns with and the fundamental question it tackles. They noted the potential impact of our findings not only on research in cognitive science, but also on the design of autotelic machines and the development of AI assistants or companions that better align with human goals. The reviewers also appreciated the novelty of both the theoretical idea and the empirical approach, which could be extended to further address questions on open-ended learning. According to the reviewers, the writing was clear, the supporting evidence was comprehensive, and the analysis thorough.
The reviewers also noted some limitations. We address some common questions below, and invite reviewers and ACs to refer to individual responses for other points raised in the review process. We hope the provided answers and clarifications help improve our submission’s score wherever possible, and otherwise kindly ask the reviewers to express remaining concerns so we can best address them.
Reviewer kdQC selected “fair” as a contribution score, and HMNe mentioned that the paper’s appeal to an AI audience could be improved. First, we would like to point out that NeurIPS welcomes (and has welcomed in the past) submissions from “Neuroscience and cognitive science” (as mentioned in the [call for submissions](https://neurips.cc/Conferences/2024/CallForPapers)), where we find our paper makes the most immediate contributions. That said, we made sure to provide several connections to the artificial intelligence and machine learning literature in the Introduction, Related Work, and Discussion sections of the article. Reviewer Grub perfectly captured our positioning in terms of impact, stating that “In terms of human learning, the paper provides new evidence that latent learning progress (rather than standard learning progress) drives people to choose some goals over others. In terms of building autotelic machines, new artificial agent models could incorporate latent learning progress when choosing goals to better mimic the adaptive and rapid learning of humans and animals”.
Reviewers kdQC and Grub pointed out that our participants were students from a university cohort. We would like to note that this is standard practice in the field. We, as well as other researchers studying higher cognition in various domains, have observed that qualitative effects found in university student populations typically generalize well to a broader, more diverse population of healthy adults. The reviewers are, however, correct that we cannot guarantee that our findings will indeed generalize to the general population and that this will be more important as we consider individual differences alongside whole-group qualitative effects. We recognized this limitation in our initial submission: “However, our sample was restricted to a relatively homogeneous group of undergraduate university students.” We now also mention that “Future studies and computational models may extend participation to a broader subject pool and specifically address individual differences in goal selection and achievement [...]”. Nonetheless, LLP was a key factor across participant clusters. While the distribution of clusters might change across populations (i.e., different strategies might be more or less common in certain populations), we predict LLP to be widely used.
All reviewers noted the lack of a generalized definition of LLP beyond the scope of our experimental paradigm, especially with respect to situations where possible solutions are not finite. First, we note that while simplistic, our novel experimental paradigm is already quite complex relative to existing setups. However, we agree with the reviewers that this is a limitation of the work as it currently stands. As noted in the article, “While we provide a simple and task-specific formalization of LLP, a generalized definition is necessary to understand the differences between LLP and other intrinsic motivation signals and ease the implementation of LLP-based goal selection in autotelic machines”. However, our focus in this initial work was to provide evidence for LLP use in humans, and it was essential to establish this aspect to “inspire the establishment of even more precise signals for goal selection in both humans and artificial agents”. We also note that even in situations where infinite spaces are theoretically possible, humans often restrict possibilities to a countable amount, and can therefore keep learning by exclusion as participants in our study did. For example, in a search problem such as looking for one’s glasses, one could consider virtually infinite locations, but rarely does so – making our current definition of LLP easily applicable. In more complex, real-world situations, we speculate that people may rely on a different internal model of their environment and their non-observable progress as they attempt to solve it – however, future work will be needed to extend LLP in these situations and describe such an internal model in full detail.
Overall, we find the reviewers’ feedback encouraging and constructive, and we look forward to continuing the discussion.
Pdf: /pdf/a73fdebf0f1b29c7a33e5eb24faa724446e64c85.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MiSO: Optimizing brain stimulation to create neural activity states | Accept (poster) | Summary: The authors proposed a framework for closed-loop brain stimulation that optimizes the search over stimulation parameters in order to guide measured brain responses toward predefined states. The proposed framework is built on two blocks: the first performs latent space alignment to combine data sets acquired from different sessions, and the second uses a convolutional neural network to predict brain responses.
Strengths: -A novel method for applying latent space alignment to merge neural activity across sessions in the field of brain stimulation.
-In terms of reproducibility, the implementation aspects are well-detailed in the manuscript, except for the source code which was not included.
Weaknesses: - The stimulation parameters have a large search space that is not defined.
- The modality to measure brain responses is not specified, and its variability will pose related but different challenges.
- The limitations are weakly addressed, and it's not clear in which situations the proposed method will fail.
- The deceptive scenarios in the search space are not mentioned.
Technical Quality: 2
Clarity: 2
Questions for Authors: -Could the authors justify the innovations given that 1) conducting latent space alignment and, 2) using a CNN to predict the effects of stimulation parameter combination have been already reported in the literature?
-It is not clear why 4,500 uStim configurations. Could the authors clarify the figure?
-The multi-session need mentioned is not supported with evidence. Why longer sessions cannot alleviate the problem?
-Are there any parameter combinations irrelevant? If not, support why each of the possible 4,500 configurations is relevant and their effects mutually exclusive.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: -It is not clear the situations in which the proposed method will fail. The deceptive scenarios in the search space are not mentioned.
-There are no examples or further information about the massive number of stimulation configurations or the extent of them that might have redundant effects.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
> - The stimulation parameters have a large search space that is not defined.
In this work, the large search space consists of combinations of multiple electrodes chosen among the 96 electrodes of the array; in theory, any combination of the 96 electrodes can be used. The number of possible stimulation patterns (i.e., the size of the search space) grows combinatorially with the number of electrodes used in uStim.
In general, a large search space in brain stimulation includes the combination of other uStim parameters. Examples include current amplitude, frequency, duration, waveform (temporal patterns), and timing of stimulation. The combination of these parameters creates a large search space, which requires a strategic method to perform optimization.
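The growth of this search space can be made concrete; the three amplitude and three frequency levels below are purely illustrative, not our experimental grid:

```python
from math import comb

# Patterns using exactly k of 96 electrodes, optionally crossed
# with 3 amplitude x 3 frequency levels (illustrative numbers)
for k in (1, 2, 3, 4):
    patterns = comb(96, k)
    print(f"k={k}: {patterns:>9} electrode sets, "
          f"{patterns * 9:>10} with 3x3 amp/freq levels")
```

Even at k = 4, exhaustive testing is far beyond what can be sampled in behaving-animal experiments, which motivates a strategic optimization method.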
> - The modality to measure brain responses is not specified, and its variability will pose related but different challenges.
We used electrophysiology recordings obtained from a chronically-implanted 96 electrode Utah array. Because the array was implanted 3 months before these experiments were conducted (and therefore had enough time to stabilize), our recorded signals were quite stable from one day to the next. Thus, the variability in recorded neural activity was in the range that our alignment and closed-loop framework could handle.
> - The limitations are weakly addressed, and it's not clear in which situations the proposed method will fail.
Please see the General Response to Reviewers, “Limitation of MiSO."
> - The deceptive scenarios in the search space are not mentioned.
Our CNN utilizes the spatial smoothness of the uStim effect over the multi-electrode array (a 10x10 grid of electrodes). Thus, one deceptive scenario in the search space is when there is no spatial structure in the data.
To illustrate this, we created the deceptive scenario by shuffling the spatial location of the electrodes (Fig. R2A: original neural activity dataset, Fig. R2B: shuffled dataset). The CNN model trained on the deceptive dataset resulted in poor generalization performance compared to the CNN trained on the original dataset (Fig. R2C). Because of the CNN’s smoothness assumption, when electrodes are held out from the training data, the model trained on the deceptive dataset still expects smoothness over the array space and fails, since the shuffled data contain no spatial patterns to exploit. However, as shown in the paper, our data do not support the deceptive scenario, and the CNN model generalized well to untested patterns.
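A minimal sketch of this kind of spatial shuffle (an illustrative construction, not our exact analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-electrode values on a 10x10 grid (a stand-in for uStim effects)
grid = rng.normal(size=(10, 10))

# Deceptive control: permute electrode locations, destroying any
# spatial structure while keeping the same set of values
shuffled = rng.permutation(grid.ravel()).reshape(10, 10)
```

The shuffled grid has identical marginal statistics but no spatial smoothness, which is exactly the structure a convolutional model relies on.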
> **Questions**
> - Could the authors justify the innovations given that 1) conducting latent space alignment and, 2) using a CNN to predict the effects of stimulation parameter combination have been already reported in the literature?
To our knowledge, this is the first study combining latent space alignment and a CNN model in the field of brain stimulation. While latent space alignment has been largely studied in the field of BCIs, to our knowledge, it has not been used for brain stimulation. We believe that the application of latent space alignment in brain stimulation enables the training of a large neural network model, creating novel approaches to advance the field of brain stimulation.
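One common linear alignment approach, shown here purely as an illustration (the specific alignment method used in MiSO may differ), is orthogonal Procrustes between the latent activity of two days:

```python
import numpy as np

def align_latents(day_k, day_ref):
    """Orthogonal Procrustes: rotation R minimizing
    ||day_k @ R - day_ref||_F. Inputs are (n_samples, n_latents)
    and assumed mean-centered."""
    u, _, vt = np.linalg.svd(day_k.T @ day_ref)
    return u @ vt

rng = np.random.default_rng(1)
ref = rng.normal(size=(200, 5))          # reference-day latents
q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
day = ref @ q.T                          # simulated rotated day
aligned = day @ align_latents(day, ref)  # recovers the reference
```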
> - It is not clear why 4,500 uStim configurations. Could the authors clarify the figure?
The figure of 4,560 uStim configurations was calculated as “96 choose 2” (i.e., choosing 2 among the 96 electrodes).
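As a quick check of this count:

```python
from math import comb

# Unordered pairs among 96 electrodes: 96 * 95 / 2
n_patterns = comb(96, 2)
```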
> - Are there any parameter combinations irrelevant? If not, support why each of the possible 4,560 configurations is relevant and their effects mutually exclusive.
The 4,560 configurations are not mutually exclusive. As we reported in Fig. 3B and C, there is a spatial relationship, i.e., nearby electrodes tend to produce a similar effect. Therefore, there are configurations that produce similar uStim effects.
> - The multi-session need mentioned is not supported with evidence. Why longer sessions cannot alleviate the problem?
Multiple sessions are needed because of the limited number of trials we can carry out each day (~300-500 uStim trials and ~100 no-uStim trials) due to the animals’ satiation. This is a limitation widely shared by experimental labs performing experiments with behaving animals. Thus, we needed to integrate data across multiple sessions to have enough data to train a CNN model.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing all my questions and comments. Upon the incorporation of the relevant scenarios outlined in the general responses, the manuscript will be considered for acceptance. | Summary: In this work, the authors propose MiSO (Microstimulation Optimization), a closed-loop stimulation framework designed to optimally generate microstimulation patterns to drive neural activity towards target states. The use of many electrodes for stimulation presents a challenge due to the curse of dimensionality, leading to an exponential increase in the number of searchable patterns. The authors address this issue with two novel contributions:
1. They use latent space alignment method to concatenate data across recording days, taking into account the drift in neural states and aligning the neural activity of subsequent days to a reference day.
2. They test multiple predictive model architectures, including Multi-Layer Perceptrons (MLPs), Gaussian Processes (GPs), and Convolutional Neural Networks (CNNs), to predict the effect of stimulation for unseen patterns. Additional experiments show that spatially close electrodes can produce similar neural patterns, and CNNs are particularly effective at capturing this feature.
The authors demonstrate that using MiSO with double electrode stimulation patterns can effectively drive neural activity towards the target state.
Strengths: This is a well-written manuscript with interesting results. The authors tackle a significant challenge in the field of brain-computer interfaces (BCI): how to guide a population of neural activity towards a target neural state using a large number of stimulation patterns. The use of a subspace alignment method is particularly noteworthy, as it enables the utilization of data across different days, which is crucial given the experimental and clinical limitations. Although the implementation of neural networks to predict the effect of stimulation in a closed-loop system has been proposed before (Rao, Current Opinion in Neurobiology 2019), and similar methods have been used in mice (Ref 19) and non-human primates (NHPs, Ref 20), to my knowledge this is the first work that applies subspace alignment and CNNs in a closed-loop system in NHPs.
Weaknesses: 1. It is unclear why the authors do not fully integrate the CNN into the closed-loop system. From my understanding, the CNN generates predictions expected to produce unique responses to stimulation for unseen patterns. These patterns (Zp) are then used to find the optimal stimulation pattern. Another approach could be to solve the inverse of the CNN to identify the best input pattern that generates the target output pattern(e.g. Bashivan et al, Science, 2019), thus fully integrating the CNN into the closed loop. In the current setup, the CNN is only trained and tested once at the beginning, with the epsilon-greedy algorithm taking over—a method already explored in Ref 19. Can authors discuss why they have chosen this framework?
2. As mentioned in the discussion, the system is only tested with up to two electrodes at a time, significantly limiting the number of possible patterns to around 4,000. This problem size does not necessarily require a CNN to solve; previous methods, such as in reference 19, have used a larger number of electrodes without CNNs. The authors should discuss how they plan to scale this approach to a higher number of electrodes and the experimental and theoretical challenges involved.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The current paper reports only the L1 error as a metric for convergence. However, the speed of convergence is an important factor in evaluating an algorithm's success. Specifically, from Figure 4B, it is unclear when MiSO with double electrodes achieves convergence close to the target. Can the authors show how fast each of the algorithms converges to the target?
2. The authors should include an example of a neural target pattern in Figure 4. Similar to Fig S1, a visual representation of target activity is important.
3. Did the authors attempt to show visual stimuli to the monkey and recreate the response pattern using MiSO? If such data exists, it would be beneficial to include it.
4. What are the theoretical limits of control in this setting? Specifically, what is the maximum effective dimensionality of control that can be achieved using 96 channels of stimulation? As shown in Figure 3, the spatial distance affects the pattern differences.
5. How robust is this system to massive state fluctuations of the brain? Can their linear subspace alignment method capture them, or do they require a refit? Moreover, when do they decide to refit the CNN or add the current trials to its training data?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: In its current form, it is unclear how this system can advance the state of the art in BCI systems. The authors should discuss the challenges they face when increasing the number of channels. Additionally, given the small current of 25 µA used here, how do the authors think they can induce behavioral changes, especially since FEF saccades typically require currents around 50 µA? Lastly, if they increase the number of stimulation electrodes, what are the potential implications for tissue damage?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
> 1. It is unclear why the authors do not fully integrate the CNN into the closed-loop system…
Please see the General Response to Reviewers, “Framework design. 3."
> 2. As mentioned in the discussion, the system is only tested with up to two electrodes…
Please see the General Response to Reviewers, “Scalability of MiSO"
> **Questions**
> 1. The current paper reports only the L1 error as a metric for convergence. However, …
The reviewer is right that the speed of convergence is another important performance measure. MiSO with double electrode stimulation converged around the target activity (a red circle in Fig. R1A of the attached pdf file) in 12.3 trials on average (10, 17, and 10 trials for each session). We did not compute the same measure for MiSO with single electrode stimulation because some targets were intentionally set to be unreachable (cf. Fig. 4A) and therefore the algorithm did not converge.
Note that the convergence to a certain uStim pattern does not necessarily mean good performance. The uStim pattern which was initially best and achieved the target may not achieve the same performance later in the session, for example, due to changes in the uStim effect after repeated stimulation. Thus, the convergence measure we report here is based on the convergence to the target activity pattern rather than the convergence to a specific uStim pattern.
> 2. The authors should include an example of a neural target pattern in Figure 4…
We included a visual representation of neural target activity in the pdf file attached in the general rebuttal section (Fig. R1B). The neural activity shown in the figure is based on the firing activity of neurons during no uStim trials, which can be projected onto the latent target activity space to obtain the latent target value shown in Fig. 4A.
> 3. Did the authors attempt to show visual stimuli to the monkey and...
This is a great idea. In fact, this is a key motivation for why we developed MiSO, and our next step. We plan to define our target activity as a pattern related to a certain brain process and achieve that target with MiSO.
> 4. What are the theoretical limits of control in this setting? Specifically, what is …
The maximum effective dimensionality of control achieved with single electrode uStim was around 5 (Fig. R1C). As we reported in the paper, double electrode uStim produced novel activity (i.e., activity not produced by single electrode uStim), thereby likely increasing the effective dimensionality of control.
> 5. What is the limitation of this system to massive state fluctuations of the brain? Can their…
Please see the General Response to Reviewers, “Limitation of MiSO. 2."
> **Limitations**
> In its current form, it is unclear how this system can advance the state of the art in BCI systems. …
The reviewer is right that currents above 50 µA typically produce saccades in FEF, a brain region immediately adjacent to our array implants. While inducing a monkey’s movement by customizing a stimulation pattern would be an interesting application, we would like to highlight that our ultimate goal is to influence the cognitive state of the animal without overtly driving behavior. Our uStim current range was designed to achieve this. Moore & Armstrong (Nature, 2003) induced attentional modulation with FEF uStim using the same range of currents we used in this paper. We believe that a relatively lower current has the potential to induce diverse behavioral or brain state modulation without directly causing movement.
When using multiple electrodes, we carefully controlled the total current within a safe range (less than 50 µA in our experiment) so as not to damage tissue. If the total current remains low, splitting it among more electrodes would not lead to damage (and in fact reduces the peak current at any electrode). If the total current is allowed to increase with the number of stimulation electrodes, one way to minimize the potential for damage would be to restrict stimulation on adjacent electrodes (separated by 400 microns), thereby reducing the peak current at any spatial location.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for responding to all of my comments. I have no further comments. | Summary: The paper presents MiSO (MicroStimulation Optimization), a closed-loop framework designed to drive neural population activity towards specified states by optimizing stimulation parameters over a large parameter space. MiSO addresses the challenge of the large search space by: latent space alignment and a convolutional neural network (CNN) to predict brain responses to unseen stimulation parameters.
Strengths: - Leveraging latent space alignment to address the challenge of a large subspace is an interesting approach.
- Experimental evaluation on a primate subject enhances credibility to application of the approach in the real world.
- The paper is structured well.
Weaknesses: - The work is preliminary, with limited subjects, baselines, model comparisons, and latent space alignment methods.
- The applicability is tested on very little data and only on one primate subject, over 5 sessions; hence it is hard to gauge the effectiveness.
- The baselines are few and simple.
- The latent space alignment method chosen is a classical Procrustes method; the choice behind this is not explained.
- The applicability of this method looks to be in a very specialized field. The broader impact or directions, even in other stimulation protocols such as functional electrical stimulation (FES) to which this method could be applied, are not explained. The paper claims to have a social impact; however, the effort to identify the social impact outside the experimented area is not visible.
Technical Quality: 2
Clarity: 3
Questions for Authors: - L 98: How does MISO identify a reference latent subspace?
- How sensitive is this method to the choice of latent space alignment?
- Other latent space alignment methods, such as canonical correlation analysis (CCA), manifold alignment, etc., exist and may be more relevant for integrating data from different sources (e.g., electrodes). Why do the authors use this particular method?
- Model comparisons were done only among MLPs, Gaussian Processes, and ConvNets; why not others, e.g., classical ML methods as well as other sequence models like LSTM and GRU?
- Is there any time-delay embedding on the inputs?
- What is the computational complexity of the overall method? How does it scale with the number of electrodes?
- How robust are the CNN predictions to variability in neural responses or to potential noise in the data?
- How does the performance of MiSO degrade with increasing complexity of the parameter space or neural dynamics?
- What are the potential impacts of recording instabilities on the latent space alignment and subsequent model predictions?
- Providing a broader outlook, connecting its concepts to other similar areas, and discussing its impact would be desirable. For example, in the context of FES, the goal would be to map stimulation parameters to muscle responses rather than neural population activity.
- Please mention the limitations of the work more clearly
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations are not stated properly
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weaknesses**
> - The work is preliminary with limited subjects, baselines, …
> - The applicability is tested on very little data and …
The reviewer is correct that we report the results from 5 closed-loop sessions in Fig. 2B. However, this is only a small subset of the data collected for this manuscript. In total, 11 sessions of data were required for Fig. 2B (1 to identify the reference latent space, 5 to calculate the average prediction, and then the 5 shown) and 15 sessions were required for Fig. 4B. In total, 26 sessions of data collection directly informed the results presented here, and hundreds of experimental sessions recorded across 3 different subjects were involved in building the framework and initial analyses for this project.
For model comparisons and latent space alignment method, please see the General Response to Reviewers, “Framework design. 1 and 2."
> - The baselines are few and simple.
Given the limited number of trials we can carry out each day (~300-500 uStim trials and ~100 no-uStim trials) due to the animals’ satiation, we had to limit the number of baselines in order to meaningfully compare each approach. Thus, we selected the two baseline comparisons (no-uStim and random uStim) that we believed were the most critical for the assessment of MiSO.
The no-uStim baseline is essential to assess that our stimulation method is effective and modulates the brain beyond natural brain activity fluctuations. Furthermore, including no-uStim trials helped to minimize the likelihood of tissue damage due to repeated stimulation. The random uStim baseline was essential to evaluate that our uStim pattern selection algorithm performed better than a random selection. We will test other baselines in future work.
> - The latent space alignment method chosen is a classical Procrustes method, …
Please see the General Response to Reviewers, “Framework design. 1."
> - The applicability of this method looks to be in a very specialized field. …
> - Providing a broader outlook and connecting its concepts to other similar areas…
The general framework of MiSO can be utilized with other stimulation modalities such as optogenetics, non-invasive brain stimulation methods, and even peripheral nerve stimulation methods such as FES. For example, FES is a technique used to stimulate nerves and muscles to help restore muscle function. This stimulation technique could be paired with MEG recordings to set specific activation targets for groups of muscles. With MiSO, we could specify such targets and then find the electrodes that need to be stimulated to generate the specified patterns of muscle activation.
> **Questions**
> - L 98: How does MISO identify a reference latent subspace?
To identify a reference latent subspace, we run one session with only no-uStim trials. We collected this dataset to capture the natural covariance structure in the population activity with as many trials as possible (by contrast, in a typical uStim session, only about 20% of trials were no-uStim trials). With this dataset, MiSO used the FA method introduced in Section 2.2 to identify a reference latent subspace.
> - How sensitive is this method to the choice of latent space alignment?
> - Other latent space alignment methods such as canonical correlation analyses (CCA), …
> - Model comparisons were done only among MLPs, Gaussian Processes, and ConvNets, …
Please see the General Response to Reviewers, “Framework design. 1 and 2."
> - Is there any time-delay embedding on the inputs?
No, there is no time-delay embedding. With the reported CNN model, we did not take into account the temporal component of the input or the evolution of the population activity. However, this is a promising direction we are interested in exploring in the future to improve the prediction performance.
> - What is the computational complexity of the overall method? How does it scale with the number of electrodes?
Please see the General Response to Reviewers, “Scalability of MiSO."
> - How robust are the CNN predictions to variability in neural responses or to potential noise in the data?
The CNN model is sensitive to the training data used and the training process. This is why we decided to use a bagging approach to stabilize the model training.
Also, due to changes in the neural activity produced by the same uStim parameters across sessions, the CNN prediction needs to be adjusted during the closed-loop session. This is where the online closed-loop framework becomes crucial. The epsilon greedy algorithm prioritizes the CNN predicted uStim patterns first. But if they do not produce the target activity patterns, it quickly switches to different uStim patterns.
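A minimal sketch of this prioritize-then-refine loop follows. It is purely illustrative: the pattern effects, noise level, epsilon, and learning rate are hypothetical toy values, not our actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_session(predicted_effect, true_effect, target,
                           n_trials=200, eps=0.2, lr=0.5):
    """Toy epsilon-greedy uStim pattern selection.

    predicted_effect : initial per-pattern estimates (e.g., from a model), shape (P,)
    true_effect      : effect each pattern actually produces (observed with noise)
    target           : scalar target activity value
    """
    estimate = predicted_effect.copy()  # running estimates, refined online
    for _ in range(n_trials):
        if rng.random() < eps:
            p = rng.integers(len(estimate))                # explore a random pattern
        else:
            p = int(np.argmin(np.abs(estimate - target)))  # exploit current best
        observed = true_effect[p] + rng.normal(scale=0.1)  # noisy neural response
        estimate[p] += lr * (observed - estimate[p])       # online correction
    best = int(np.argmin(np.abs(estimate - target)))
    return best, estimate

true_effect = np.linspace(-1.0, 1.0, 50)                   # hypothetical true effects
predicted = true_effect + rng.normal(scale=0.3, size=50)   # imperfect initial predictions
best, est = epsilon_greedy_session(predicted, true_effect, target=0.42)
```

Exploitation repeatedly corrects over-optimistic estimates, so patterns whose predicted effect was misleading are quickly abandoned in favor of patterns that actually drive activity toward the target.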
> - How does the performance of MiSO degrade with increasing complexity of the parameter space or neural dynamics?
Please see the General Response to Reviewers, “Scalability of MiSO."
> - What are the potential impacts of recording instabilities on the latent space alignment and subsequent model predictions?
The latent space alignment method we used in MiSO is stable under various types of recording instabilities. In the original paper (Ref 16), the authors reported that the performance of BCI decoding with the Procrustes method was stable under recording instabilities such as tuning changes, drop-outs or baseline shifts. We observed similar results in our dataset. The top row of Fig. S1 shows the baseline firing rate shift in uStim induced activity. However, the baseline shift across days was resolved by aligning the latent space. The bottom four rows have similar color scales horizontally, demonstrating the stability across sessions produced by the latent space alignment.
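For intuition, the core alignment step is a closed-form orthogonal Procrustes fit. The sketch below (hypothetical dimensions and variable names, not our actual code) shows how an SVD recovers a rotation between a later day's loading matrix and the reference when the two span the same subspace:

```python
import numpy as np

def procrustes_align(L_day, L_ref):
    """Orthogonal Procrustes: the O minimizing ||L_day @ O - L_ref||_F over orthogonal O."""
    U, _, Vt = np.linalg.svd(L_day.T @ L_ref)
    return U @ Vt

rng = np.random.default_rng(1)
L_ref = rng.normal(size=(96, 5))              # reference loadings: 96 channels, 5 latents
R, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # arbitrary orthogonal change of basis
L_day = L_ref @ R.T                           # later day: same subspace, rotated axes
O = procrustes_align(L_day, L_ref)
err = np.linalg.norm(L_day @ O - L_ref)       # near zero: rotation fully recovered
```

In practice the two loading matrices only approximately share a subspace, and the residual of this fit indicates how well the alignment compensates for recording instabilities.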
> - Please mention the limitations of the work more clearly
Please see the General Response to Reviewers, “Limitation of MiSO."
---
Rebuttal Comment 1.1:
Comment: Thank you for your elaborate responses; I will increase my score.
I understand that the latent space alignment is conducted on individual time samples. Given this method, I am not sure about how it manages to ensure reliable alignment, considering the possible susceptibility to noise and variability across trials. Further, as I understand, the contribution of this work is on the experimental side for possible applications to BCI, without much methodological adaptation for their case. Primarily, the contribution lies in applying previously reported concepts (latent space alignment, and CNN for predicting stimulation parameters) to a new stimulation domain, that is, brain stimulation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful consideration of our response and further comments.
> I understand that the latent space alignment is conducted on individual time samples. Given this method, I am not sure about how it manages to ensure reliable alignment, considering the possible susceptibility to noise and variability across trials.
We appreciate the reviewer mentioning this, since this is something we worried about as well when designing MiSO. We found that the FA+Procrustes method provided stable latent encoding of the neural activity (see Fig. S1), thereby compensating for the noise and variability in neural activity across sessions. In addition, we analyzed whether the time course of the latent neural activity that unfolds in the population activity space is similar across sessions when our subject was performing the same behavior using the FA+Procrustes method. We confirmed that we were able to recover a similar time course of the latent neural activity across sessions. This is further supported by Degenhart et al., (2019), in which they used FA+Procrustes on Utah array recordings (same recording technology as we are using) to achieve stable BCI decoding performance across multiple sessions. Together, these results indicated that the aligned latent space captures a common subspace across sessions, encouraging us to develop MiSO with the FA+Procrustes method.
There may be ways to improve the latent space alignment, for example by taking into account the time course of neural activity during alignment (Nonnenmacher et al., *NeurIPS*, 2017). This could help to further minimize the effect of noise and variability in neural activity in the alignment procedure.
> Further, as I understand, the contribution of this work is on the experimental side for possible applications to BCI, without much methodological adaptation for their case.
The main contribution of this work is the development of the whole closed-loop stimulation framework (MiSO) with multiple integrated ML methods. The experimental contribution of our work was to validate the MiSO framework in non-human primate experiments. The key challenge was to perform the necessary computations in real time – to adaptively update the prediction and to choose the optimal uStim pattern on each experimental trial (approximately every 1.5 seconds). There were other potential design options (as described in the general rebuttal response), such as fully integrating the CNN model into the closed-loop optimization procedure. However, their running time was prohibitive for the closed-loop framework. As a result, we chose to use the epsilon greedy algorithm initialized with the CNN prediction. This approach achieved convergence to the target in 12.3 trials with a search space of over 4,560 double electrodes uStim patterns (as shown in Fig. 4), while updating its prediction using newly recorded neural activity after each experimental trial.
We have not yet made publicly available the MiSO code to preserve the double-blind review process. Upon publication, we will provide the code to replicate the entire framework design in other experimental setups.
> Primarily, the contribution lies in applying previously reported concepts (latent space alignment, and CNN for predicting stimulation parameters) to a new stimulation domain, that is, brain stimulation.
As mentioned above, our main contribution lies in designing the whole MiSO framework, which we then validated in closed-loop non-human primate experiments. Our work paves the way to advance brain stimulation techniques by continued development of the ML methods used in MiSO (such as the FA+Procrustes method, the CNN prediction model, and the epsilon greedy algorithm). | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive comments, which helped strengthen our submission. Here, we respond to comments shared by multiple reviewers.
## Limitation of MiSO
All reviewers inquired about the limitations of MiSO. We explain two main limitations of MiSO and how we can potentially address them.
1. **MiSO does not take into account trial-to-trial variance in uStim induced activity.**
The CNN model predicts the *average* induced activity by each uStim pattern. However, there is substantial trial-to-trial variance in the induced activity, which our model does not take into account. This limits the prediction performance of the model, thereby affecting the performance of MiSO.
One potential solution is to take into account the trial-to-trial variation of the neural activity before stimulation, which can be informative about the trial-to-trial variation of the neural activity after stimulation. This can be done using a dynamical model of the neural population activity with the uStim as inputs.
2. **MiSO is susceptible to uncontrolled drifts in neural recordings across days.**
MiSO defines a reference latent space using one session and uses the same latent space throughout multiple later sessions without recalibration. Its performance could decline as the neural activity drifts across days and becomes difficult to align to the reference latent space.
Our alignment method is designed to overcome recording instabilities, allowing MiSO to perform well over several weeks. For Fig. 2, our latent space alignment worked even though the data collection involved a 3-4 week gap between sessions. With even longer periods of time, a recalibration of the reference latent space and update of the CNN prediction may be needed to maintain stable performance.
## Scalability of MiSO
Yfry and C2zx asked about the scalability and computational complexity of MiSO. Here we focus on the two main components of MiSO: the CNN model and the epsilon greedy algorithm.
1. **The CNN model scales up well by leveraging the spatial structure in the uStim effect.**
The CNN trained on single and double electrode patterns predicted the effect of untested patterns by leveraging the spatial structure of the uStim effect over the multi-electrode array (a 10x10 grid of electrodes). uStim patterns involving an even larger number of electrodes are likely to also possess spatial structure. This can be leveraged in the same way by the CNN model for generalization, making it scalable to more complex multi-electrode stimulation patterns.
2. **The epsilon greedy algorithm takes a longer time to converge as the search space expands.**
Updates with the epsilon greedy algorithm happen only for online tested uStim patterns. As the total number of possible uStim patterns increases, the time necessary to update the uStim effect of all patterns increases.
One potential approach is to remove “redundant” patterns that produce similar CNN predictions and run the closed-loop algorithm with a reduced number of patterns. Another potential approach is to update the weights of the last linear layer of the CNN model online using the recorded online data. This procedure is computationally cheap and can update the predictions of both tested and untested uStim patterns.
## Framework design
We appreciate Yfry and C2zx suggesting alternate (and entirely valid) design choices for MiSO. We’ll take the opportunity to explain the rationale behind the current MiSO design.
1. **Latent space alignment with the Procrustes method**
Yfry suggested other alignment methods such as canonical correlation analysis (CCA), manifold alignment, etc. We chose the FA+Procrustes method because it is simple (i.e., suitable for fast online computations) and has been shown to work well in BCI applications (Ref 16). Other methods may be more computationally expensive or less established in this domain, except CCA (Gallego JA, et al., 2020). However, CCA is not a natural choice in the context of MiSO because there are no matched observations across days (i.e., trial 1 on day 1 does not necessarily correspond with trial 1 on day 2).
2. **uStim effect prediction with CNN**
Yfry asked about the performance of other models such as classical ML methods and sequence models. Through initial analysis, we identified spatial structure in the uStim effect over the multi-electrode array. Thus, in this project, we focused on models that can be designed to capture the spatial structure present in the data, in particular, GP and CNN.
We are also interested in extending this model to predict the temporal dynamics of neural activity. On top of the spatial structure, the CNN model has the potential to be extended along the temporal axis. When assessing this direction, the performance comparison with other sequence models such as LSTM and GRU will be especially fruitful.
3. **Closed-loop algorithm with epsilon greedy algorithm**
C2zx suggested fully integrating the CNN in the closed-loop procedure. We did not choose this approach because of the necessity of a fast online update to compensate for activity fluctuations during the closed-loop session.
The effect of the same uStim pattern can change across sessions, and thus the CNN prediction needs to be adjusted during a given closed-loop session. This requires a fast online update process like an epsilon greedy algorithm that refines the CNN prediction to match it to the uStim effects observed during the closed-loop session. Alternatively, one can update the CNN model and invert it at each iteration, but the associated computations cannot currently be performed fast enough for the closed-loop experiment. Eventually, we would like to have the CNN model fully integrated with the online system, but more work is needed to achieve a fast and reliable online integration.
Pdf: /pdf/b30bfbf7ab996a2c4837a18ae91fe42897f278ac.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
First-Order Minimax Bilevel Optimization | Accept (poster) | Summary: This paper proposes two novel algorithms, FOSL and MemCS, for multi-block minimax bilevel optimization problems, avoiding the high complexity of computing the second-order gradient. Their theoretical analysis is quite solid and their experiments show proposed algorithms have superior performance and robustness in applications.
Overall, the proposed algorithms are novel and well-justified with practical applications. However, further elaboration is needed regarding their practicality and efficiency.
Strengths: The paper is well-written and well-structured, with strong motivation and clear logic.
The authors provide a reformulated version of minimax bilevel optimization problems and demonstrate the gap between the reformulated and original problems. The convergence analysis is also solid and thorough.
They also validate the experimental results of FOSL and MemCS in Deep AUC Maximization and meta-learning cases respectively.
Weaknesses: Since the paper primarily claims the efficiency of the proposed first-order algorithms, the authors should discuss the scale of problems these algorithms are suitable for. For example, second-order methods may fail when dealing with a large number of variables. In contrast, first-order algorithms generally handle larger-scale problems better, but the paper lacks experimental comparisons in this regard.
Besides, the gradient calculation procedure is not mentioned in the paper. I believe it is quite important for incorporating optimization problems in neural networks.
There is also a typo in the title of Algorithm 2: “Cold-star”.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Since the theoretical proof is based on various assumptions, are there any restrictions on FOSL and MemCS? For example, do they only handle convex optimization problems? Including the previously mentioned issue of the optimization problem scale, I think the applicable scope of the two algorithms should be discussed in detail.
- Could the authors include a comparison of the runtime differences between FOSL, MemCS, and second-order methods? This would better illustrate the effectiveness of the proposed methods.
- In the experimental results, FOSL and MemCS perform better. Is this because these two algorithms have a better gradient estimator? The authors should provide an explanation for this.
- Additionally, I would like to know how the gradient of the optimization problem is calculated. Is it from implicit differentiation, unrolling methods, or are there any direct analytical solutions of the gradient?
If my concerns are well addressed, I would be happy to raise my score.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer mptp for the time and valuable feedback!
**W1: Since the paper primarily claims the efficiency of the proposed first-order algorithms, the authors should discuss the scale of problems these algorithms are suitable for.**
A: From the perspective of scale, compared to deep AUC maximization, the application of meta-learning is a comparably larger-scale problem due to the vast number of tasks and variables. The need to calculate second-order derivatives introduces a high computational cost for second-order methods (e.g., MAML), making them hard to scale to this problem. However, as a first-order method, our algorithm scales well in the meta-learning setting. In Table 1 we show the memory consumption of our MemCS compared with MAML for different numbers of lower-level update steps. As a second-order method, MAML's memory cost increases with the number of lower-level update steps, whereas our MemCS algorithm maintains stable memory consumption. Additionally, Table 2 (referenced in the answer to Question 2) shows that the average iteration time of MemCS is only one-third that of MAML. This demonstrates that our algorithm is more efficient in terms of both memory and computational costs. We will conduct further studies on larger-scale problems in our future research.
Table 1. Memory cost in robust meta-learning application.
| Lower-level update step number | MAML | MemCS |
|:------------------------------:|:--------:|:--------:|
| t = 10 | 8560 MB | 7762 MB |
| t = 15 | 12006 MB | 7739 MB |
| t = 20 | 15478 MB | 7632 MB |
| t = 25                         | 18922 MB | 7817 MB |
| t = 30 | 22368 MB | 7444 MB |
Thanks for pointing out the typo and please see the response to the gradient calculation in **Answer to Q4**.
**Answer to Q1:** This is a good point. Our methods can handle non-convex optimization problems, because the overall objective function $\Phi(x):=F(x,y^*(x),z^*(x))$ is generally non-convex with respect to $x$. The concavity/convexity assumptions are made only for the maximization problem in $y$ and the minimization problem in $z$. These two optimization problems are much easier to solve than optimizing $\Phi(x)$ and usually satisfy the convexity property. This can be more clearly seen in our applications of meta-learning and deep AUC maximization. In our application of the robust meta-learning setting, the lower-level problem is optimizing a linear layer with cross-entropy loss, which satisfies the strong convexity assumption. The maximization in the upper-level minimax problem is a combination of a negative hinge function and a linear function, making it a concave function. In our deep AUC maximization application, the lower-level function uses square loss, which also satisfies the strong convexity assumption. The maximization in the upper-level minimax problem is a negative quadratic function, which is concave.
It is also worth mentioning that our assumptions, such as lower-level strong convexity, Lipschitz continuity, and bounded variance, are primarily made for theoretical analysis. These assumptions have also been adopted in existing theoretical works on bilevel optimization (e.g., [1][2][3]) and minimax bilevel optimization (e.g., [4]). For practical use, our algorithms can also be applied to broader applications where these assumptions may not hold; however, in such cases, the convergence guarantee may not be developed.
We will add the above discussions and the applicable score of our algorithms in the revision.
[1] Approximation methods for bilevel programming.
[2] Bilevel optimization: Convergence analysis and enhanced design.
[3] A framework for bilevel optimization that enables stochastic and global variance reduction algorithms.
[4] Multi-block min-max bilevel optimization with applications in multi-task deep AUC maximization.
**Answer to Q2:** Thank you for the suggestion. We have conducted a further evaluation of the average iteration time for our proposed algorithms and the second-order method in a robust meta-learning setting, as detailed in Table 2. The results demonstrate that our algorithms run significantly faster than the second-order method.
Table 2. Average iteration time of different algorithms on the robust meta-learning setting.
| Method | Second-order method | FOSL | MemCS |
|:----------------------:|:-------------------:|:-----:|:-----:|
| Average iteration time | 9.41s | 1.42s | 3.15s |
**Answer to Q3:** The reasons for the improved performance of FOSL and MemCS differ between the two applications. For deep AUC maximization, the answer is yes. In the implementation of the baseline method (mAUC-CT), second-order information is discarded, whereas we leverage this information through the calculation of the hyper-gradient. For robust meta-learning, our method is specifically designed to address the minimax bilevel optimization problem. This design allows for easy integration with rank-based methods, enhancing the robustness of training and consequently leading to better performance.
**Answer to Q4:** This is a good question. In this work, we use $\nabla \mathcal{L}^*(x) = \nabla_x F(x,y^*(x),z_{\lambda}^*(x)) + \lambda(\nabla_x G(x,z_{\lambda}^*(x)) - \nabla_x G(x,z^*(x)))$ as the gradient to update $x$, which is a first-order approximation of the vanilla hypergradient $\nabla \Phi(x)$. Proposition 4.7 shows that the gap between $\nabla \mathcal{L}^*(x)$ and $\nabla \Phi(x)$ is proportional to $1/\lambda$, and hence can be made sufficiently small by choosing the regularization parameter $\lambda$ large enough. Since $y^*(x),z_{\lambda}^*(x),z^*(x)$ cannot be obtained directly, we use $\nabla_x \mathcal{L}(x_{t},y_{t},z_{t},v_{t}) = \nabla_x F(x_t,y_t,z_t) + \lambda(\nabla_x G(x_t,z_t) - \nabla_x G(x_t,v_t))$ to approximate $\nabla \mathcal{L}^*(x)$ at the $(t+1)$-th iteration.
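The $\mathcal{O}(1/\lambda)$ gap can be sanity-checked on a one-dimensional toy bilevel problem. The functions below are illustrative stand-ins, not the paper's objectives, and the max-player $y$ is dropped for brevity:

```python
# Toy check of the O(1/lambda) hypergradient gap. Hypothetical problem:
# F(x,z) = (x-1)^2/2 + z^2/2, G(x,z) = (z-x)^2/2, so the lower-level
# minimizer is z*(x) = x, Phi(x) = F(x, z*(x)), and the true hypergradient
# is dPhi(x) = (x-1) + x.

def true_hypergrad(x):
    return (x - 1.0) + x

def penalty_hypergrad(x, lam):
    # z_lambda minimizes F(x,.) + lam * G(x,.): z + lam * (z - x) = 0
    z_lam = lam * x / (1.0 + lam)
    z_star = x                      # exact lower-level minimizer
    dG_dx = lambda z: -(z - x)      # partial derivative of G in x
    return (x - 1.0) + lam * (dG_dx(z_lam) - dG_dx(z_star))

x = 2.0
for lam in (1.0, 10.0, 100.0, 1000.0):
    gap = abs(true_hypergrad(x) - penalty_hypergrad(x, lam))
    # the gap shrinks like 1/lambda: gap * (1 + lambda) stays constant (= x)
    print(f"lambda={lam:7.1f}  gap={gap:.6f}  gap*(1+lambda)={gap * (1.0 + lam):.4f}")
```

For this toy choice the gap is exactly $x/(1+\lambda)$, matching the $1/\lambda$ rate in Proposition 4.7.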
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for your detailed explanation. I will raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer mptp,
Thanks so much for your updates and for raising your score. We are happy that our responses clarify your questions. We will take your suggestions into our revision.
Best, Authors | Summary: This work proposes two novel fully first-order algorithms, named FOSL and MemCS, for multi-block minimax bilevel optimization problems. Specifically, the authors reformulate the lower-level problem as a value-function-based constraint and transform the minimax bilevel optimization into a surrogate minimax problem. FOSL and MemCS are proposed to solve the surrogate minimax problem by alternately updating the parameters through SGD. Theoretical analysis of the convergence is conducted and extensive experiments on deep AUC maximization and rank-based robust meta-learning show the effectiveness of the proposed method.
Strengths: The proposed method is sound and efficient.
Weaknesses: 1. I have concerns about the soundness of reformulating the minimax bilevel optimization as a minimax problem in Eq. 2. I hope the authors can conduct more analysis or provide some references that utilize the same technique.
2. I have a question about Proposition 4.9. Will it guarantee that $E|| y_{t+1}-y^*(x_{t+1}) || - E|| y_{t}-y^*(x_{t}) || \le 0$?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weakness.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer pAAu for their time and valuable feedback!
**W1: I have concerns about the soundness of reformulating the minimax bilevel optimization as a minimax problem in Eq.(2). I hope the authors can conduct more analysis or provide some references that utilize the same technique.**
A: This is a good question. The lower level problem in Eq.(1) aims to find an optimal solution $z_i^*$ of $g_i(x,z_i)$. In Eq. (2), the lower-level problem is converted into a constraint $g_i(x,z_i) - g_i(x,z_i^*) \leq 0$. Since $g_i(x,z_i)$ has a unique minimizer $z_i^*$, this constraint is satisfied if and only if $z_i=z_i^*$. As a result, the objective function in Eq. (2) becomes $\min_x\max_y \frac{1}{n}\sum_{i=1}^nf_i(x,y,z_i^*)$, which is the same as that in Eq. (1). Similar techniques also appear in [1][2][3].
[1] On solving simple bilevel programs with a nonconvex lower level program.
[2] A fully first-order method for stochastic bilevel optimization.
[3] A value-function-based interior-point method for non-convex bi-level optimization.
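The reformulation argument above can be checked numerically on a toy strongly convex lower-level objective (the function is an illustrative stand-in for $g_i$):

```python
# Tiny numerical check of the reformulation: for a strongly convex
# lower-level objective (here the illustrative g(x, z) = (z - x)^2, with
# unique minimizer z* = x), the constraint g(x, z) - g(x, z*) <= 0 is
# satisfied only at z = z*, so the constrained problem in Eq. (2) optimizes
# the upper level at exactly the lower-level solution.

def g(x, z):
    return (z - x) ** 2

x = 1.5
z_star = x  # unique minimizer of g(x, .)

grid = [k / 10.0 for k in range(-30, 31)]
feasible = [z for z in grid if g(x, z) - g(x, z_star) <= 0.0]
print(feasible)  # only z = z* = 1.5 survives the constraint
```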
**W2: I have a question about Proposition 4.9. Will it guarantee that** $E\\|y_{t+1}-y^*(x_{t+1})\\| - E\\|y_{t}-y^*(x_{t})\\| \leq 0$?
A: Proposition 4.9 does not guarantee that $\mathbb{E}\\|y_{t+1}-y^*(x_{t+1})\\| - \mathbb{E}\\|y_{t}-y^*(x_{t})\\| \leq 0$ for **all** $t$. This is due to the existence of the positive error terms $\mathcal{O}(\frac{\eta_y^2}{|I_t|})(\sigma_f^2+\sigma_{th}^2)$, $\mathcal{O}(\eta_y)\frac{1}{n}\sum_{i=1}^n\mathbb{E}\\|v_{i,t}-z_i^*(x_t)\\|^2$, $\mathcal{O}(\eta_x^2/\eta_y)\mathbb{E}\\|\tilde{h}_x^t\\|^2$, and $\mathcal{O}(\eta_x^2)\mathbb{E}\\|h_x^t\\|^2$.
These terms have no direct correlation with the negative term $-\mathcal{O}(\eta_y)\mathbb{E}\\|y_{t} - y^*(x_{t})\\|^2$, and hence it cannot guarantee that their summation together is negative.
Instead, this inequality can be understood as follows. By rearranging Proposition 4.9, we have
$$
\mathbb{E}\\|y_{t+1}-y^*(x_{t+1})\\|^2 \leq (1-\mathcal{O}(\eta_y))\mathbb{E}\\|y_{t}-y^*(x_{t})\\|^2 + \mathcal{O}\bigg(\frac{\eta_y^2}{|I_t|}\bigg)(\sigma_f^2+\sigma_{th}^2) + \mathcal{O}(\eta_y)\frac{1}{n}\sum_{i=1}^n\mathbb{E}\\|v_{i,t}-z_i^*(x_t)\\|^2 + \mathcal{O}(\eta_x^2/\eta_y)\mathbb{E}\\|\tilde{h}_x^t\\|^2 + \mathcal{O}(\eta_x^2)\mathbb{E}\\|h_x^t\\|^2.
$$
This indicates the decay of the optimality distance $\mathbb{E}\\|y_{t}-y^*(x_{t})\\|^2$ over iterations. The positive error terms $\mathcal{O}(\frac{\eta_y^2}{|I_t|})(\sigma_f^2 + \sigma_{th}^2)$ and $\mathcal{O}(\eta_x^2)\mathbb{E}\\|h_x^t\\|^2$ are negligible in the final convergence analysis because they are proportional to the square of the stepsizes $\eta_x$ and $\eta_y$. By choosing sufficiently small stepsizes, these terms can be made sufficiently small (e.g., see the proofs of Theorem 4.10). The error term $\mathcal{O}(\eta_y)\frac{1}{n}\sum_{i=1}^n\mathbb{E}\\|v_{i,t}-z_i^*(x_t)\\|^2$ can be merged into the descent term of iterate $v_{i,t}$ (see the details in Lemma E.3). The error term $\mathcal{O}(\eta_x^2/\eta_y)\mathbb{E}\\|\tilde{h}_x^t\\|^2$ can be canceled out by the negative term $-\mathbb{E}\\|\tilde{h}_x^t\\|^2$ in Proposition 4.8 for stepsize $\eta_x$ sufficiently small.
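The stepsize dependence in this argument can be illustrated by iterating a scalar stand-in for the inequality (the constants and the lumped error term below are hypothetical):

```python
# Scalar stand-in for the rearranged inequality: iterate
#   a_{t+1} = (1 - c*eta) * a_t + C * eta**2,
# where a_t plays the role of E||y_t - y*(x_t)||^2 and C*eta**2 lumps the
# positive error terms (c and C are hypothetical constants). The fixed point
# is C*eta/c, so a smaller stepsize eta drives the residual floor down, even
# though the sequence need not decrease at every single step.

def settle(a0, eta, c=1.0, C=1.0):
    a = a0
    for _ in range(int(20.0 / eta)):  # enough steps for the contraction
        a = (1.0 - c * eta) * a + C * eta ** 2
    return a

for eta in (0.1, 0.01):
    print(f"eta={eta}: settles near {settle(1.0, eta):.6f} (floor C*eta/c = {eta})")
```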
Strengths: 1. The authors convert the original minimax bilevel problem into a simple minimax problem and solve it by first-order single-loop algorithm.
2. The authors provide a comprehensive convergence analysis on the proposed method.
3. The proposed method is easy to converge and outperforms the baselines on CIFAR-100, CelebA and OGBG-MolPCBA.
Weaknesses: 1. AUC-CT avoids the calculation of second-order matrices and shows comparable efficiency to the proposed first-order algorithm, which reduces the contribution of this work to improving algorithm efficiency. It would be better to provide more discussions.
2. It would be better to provide the memory consumption of baselines for better comparisons.
3. The proposed method only achieves comparable performance to mAUC-CT on CheXpert. It would be better to provide more analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: My main concern is about the contribution of this work to computational efficiency (especially compared to AUC-CT).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I cannot find the discussion about the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer Mt97 for their time and valuable feedback!
**W1: AUC-CT avoids the calculation of second-order matrices and shows comparable efficiency to the proposed first-order algorithm, which reduces the contribution of this work to improving algorithm efficiency. It would be better to provide more discussions.**
A: In the theoretical sections of [1], the authors proposed approximating the hypergradient by estimating second-order matrices. However, in their experiments, they discarded the second-order information for implementation efficiency, leading to a larger approximation error. In contrast, we proposed to address the minimax bilevel optimization problem in a first-order manner, proving that our algorithms maintain a bounded approximation error. This also explains why our method achieves a higher accuracy than AUC-CT in the experiments.
[1] Multi-block min-max bilevel optimization with applications in multi-task deep AUC maximization.
**W2: It would be better to provide the memory consumption of baselines for better comparisons.**
A: Thank you for bringing up this point. We evaluated the memory cost of our MemCS method compared to the baseline method in the robust meta-learning setting on the Mini-ImageNet dataset, using the same training configuration as outlined in Table 1. Our results indicate that the MemCS method achieves better performance and robustness than MAML with lower computational cost. Additionally, as the number of lower-level update steps increases, MAML’s memory cost rises significantly, while MemCS maintains consistent memory usage, demonstrating the scalability of our first-order algorithm. Due to limitations in time and computational resources, we plan to include a more detailed comparison of different settings in a future revision of our paper.
Table 1. Memory cost in robust meta-learning application.
| Lower-level update step number | MAML | MemCS |
|:------------------------------:|:--------:|:--------:|
| t = 10 | 8560 MB | 7762 MB |
| t = 15 | 12006 MB | 7739 MB |
| t = 20 | 15478 MB | 7632 MB |
| t = 25 | 18922 MB | 7817 MB |
| t = 30 | 22368 MB | 7444 MB |
**W3: The proposed method only achieves comparable performance to mAUC-CT on CheXpert. It would be better to provide more analysis.**
A: Thank you for highlighting this phenomenon. The mAUC-CT[1] implementation employs a simpler structure that directly avoids second-order derivative computations, which may be advantageous for the large-scale CheXpert dataset. However, this simplification introduces an additional approximation error, resulting in the mAUC-CT method experiencing significant variance ($\pm 0.1495$), as shown in Table 1 of the main text. In contrast, our algorithm achieves a much smaller variance ($\pm 0.0051$) compared to mAUC-CT.
[1] Multi-block min-max bilevel optimization with applications in multi-task deep AUC maximization. | Summary: The paper introduces FOSL and MemCS that are two new first-order algorithms for multi-block minimax bilevel optimization, demonstrating superior sample complexities and robust performance in empirical evaluations on deep AUC maximization and robust meta-learning applications.
Strengths: The paper introduces FOSL, a novel fully first-order single-loop algorithm for minimax bilevel optimization, simplifying the problem structure and achieving competitive sample complexity without second-order computations. MemCS is proposed as a memory-efficient method with cold-start initialization, addressing challenges in scenarios with numerous blocks and demonstrating improved sample complexity compared to traditional methods.
Weaknesses: 1. The assumptions look quite strong, e.g., Assumption 5.3 requires strong convexity, Assumption 5.4 requires several boundnesses on the derivatives and high-order derivatives, which usually are not satisfied in practice.
2. It appears that the memory costs have not been compared.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer iRgq for their time and valuable feedback!
**W1: The assumptions look quite strong, e.g., Assumption 5.3 requires strong convexity, and Assumption 5.4 requires several boundnesses on the derivatives and high-order derivatives, which usually are not satisfied in practice.**
A: Thanks for your question! Assumptions 5.3 and 5.4 have been widely adopted by existing works on bilevel optimization, such as [1][2][3]. The same set of assumptions has also been made by other studies on multi-block minimax bilevel optimization, such as [4].
This is also easily verified in our applications of robust meta-learning and deep AUC maximization. In the robust meta-learning setting, the lower-level problem is optimizing a linear layer with cross-entropy loss, which satisfies the strong convexity assumption. The maximization in the upper-level minimax problem is a combination of a negative hinge function and a linear function, making it a concave function. In the deep AUC maximization application, the lower-level function uses square loss, which also satisfies the strong convexity assumption. The maximization in the upper-level minimax problem is a negative quadratic function, which is concave. We intend to investigate relaxed assumptions in our future study.
[1] Approximation methods for bilevel programming.
[2] Bilevel optimization: Convergence analysis and enhanced design.
[3] A framework for bilevel optimization that enables stochastic and global variance reduction algorithms.
[4] Multi-block min-max bilevel optimization with applications in multi-task deep AUC maximization.
**W2: It appears that the memory costs have not been compared.**
A: Thank you for highlighting this point. We evaluated the memory cost of our MemCS method against the baseline method in the robust meta-learning setting on the Mini-ImageNet dataset, using the same training configuration as shown in Table 1 below. Our results demonstrate that the MemCS method achieves better performance and robustness compared to MAML, with lower computational costs. Additionally, as the number of lower-level update steps increases, MAML’s memory cost rises significantly, whereas our MemCS method maintains consistent memory usage, showcasing the scalability of our first-order algorithm. Due to limitations in time and computational resources, we will provide a more detailed comparison of different settings in a future revision of our paper.
Table 1. Memory cost in robust meta-learning application.
| Lower-level update step number | MAML | MemCS |
|:-----------------------:|:--------:|:--------:|
| t = 10 | 8560 MB | 7762 MB |
| t = 15 | 12006 MB | 7739 MB |
| t = 20 | 15478 MB | 7632 MB |
| t = 25 | 18922 MB | 7817 MB |
| t = 30 | 22368 MB | 7444 MB | | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models | Accept (poster) | Summary: The paper introduces a novel concept called the “safety landscape,” which assesses the safety of generative language models. Within this landscape, the “safety basin” is defined as a safe local neighborhood around a model’s parameters. The key contribution is the introduction of a new metric called “Visage” that probes this safety landscape to determine how robust a model is against malicious fine-tuning. The authors present experimental results in both 1D and 2D safety landscapes, using various open-source language models, to demonstrate how Visage can help determine how robust models are against malicious finetuning and, in turn, help build more secure models.
Strengths: The paper is well-written and presents a novel approach to model safety. The introduction of the safety landscape and safety basin concepts offers a novel perspective on evaluating model robustness. The introduced Visage metric is particularly valuable as it may aid in constructing models that are resistant to malicious fine-tuning in the future. This is especially significant for powerful open-source models, which are more vulnerable to such attacks. The experimental validation in both 1D and 2D safety landscapes provides a clear demonstration of the effectiveness of Visage.
Weaknesses: - The paper uses the broad term “safety” to primarily describe a model’s refusal to answer potentially harmful queries. However, safety does not necessarily mean refusing to answer. For example, the authors use a refusal keyword detection mechanism to evaluate the safety landscape, but this approach has limitations. Safe responses can vary widely depending on the context, and simply refusing to answer can sometimes be considered unsafe (e.g. in the context of advice on self-harm). Further, it does not cover context-dependent unsafe responses, such as a response describing attributes of specific individuals or minority groups, which can be safe or unsafe depending on the content.
- Therefore the discussion on limitations could be more detailed.
Minor comments:
- Some related work is missing, such as https://openreview.net/pdf?id=6t0Kwf8-jrj, https://arxiv.org/pdf/2312.06681.
- Missing x-axis label in Figure 2 and labeling in Table 1 could be improved.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Table 1, it is unclear what the “Aligned” column refers to. Can the authors clarify this?
- The paper's discussion of limitations is somewhat brief. Could the authors elaborate on the limitations of their current approach, including any potential weaknesses or challenges? This would also provide insights into potential future research to advance this interesting study.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The descriptions of limitations could be improved, see questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for acknowledging the significance of our findings and contributions! We hope the following clarifications can address the reviewer's concerns.
1. **Different refusal evaluation methods other than keyword search.**
We agree with the reviewer that safety does not necessarily mean refusing to answer. Thus, we have expanded our experiments by testing an additional evaluation metric, Llama Guard 2. Our results demonstrate that the LLM safety basins exist regardless of the harmfulness evaluation metrics. Please check General Response -> Concern 1 for more details.
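For context on what the keyword-search metric computes, a minimal sketch follows (the keyword list is illustrative, not the exact list used in the paper):

```python
# Minimal sketch of keyword-based refusal detection for computing ASR:
# a response counts as an attack success when no refusal keyword appears.
# The keyword list is illustrative, not the exact list from the paper.

REFUSAL_KEYWORDS = ["I'm sorry", "I cannot", "I can't", "As an AI",
                    "I apologize", "is illegal and unethical"]

def is_refusal(response: str) -> bool:
    return any(k.lower() in response.lower() for k in REFUSAL_KEYWORDS)

def attack_success_rate(responses):
    return sum(not is_refusal(r) for r in responses) / len(responses)

responses = ["I'm sorry, I cannot help with that.",
             "Sure, here is how you would do it: ..."]
print(attack_success_rate(responses))  # 0.5
```

A perturbed model emitting gibberish or off-topic text would also score as a “success” under such a detector, which is why the Llama Guard 2 comparison above serves as a useful control.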
2. **Clarifications on x-axis label in Fig. 2 and Table 1.**
The x-axis represents the scalar parameter $\alpha$ in Eq. 1, indicating the amount of perturbation added to the model’s original parameters. We have added annotations to the x-axis in all figures in the attached rebuttal PDF in General Response. We will also include these annotations in the revised version of the original paper. In Fig. 2a, the origin represents the Llama2-7B base model, and x-axis = 1 represents the Llama2-7B-chat model. In Fig. 2b, the origin represents the unperturbed model (Llama2-7B-chat), and all other points represent the measurement of ASR while perturbing the model weights along positive or negative directions. The “aligned” column in Table 1 refers to the original off-the-shelf models. We will clarify these annotations in the final version.
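To make the role of $\alpha$ concrete, a minimal sketch of sampling models along a direction in weight space (short toy vectors stand in for real model weights):

```python
# Sketch of the weight perturbation behind the landscape plots: models are
# sampled as theta(alpha) = theta0 + alpha * d, where alpha is the x-axis
# value in the figures. For interpolation between two checkpoints, the
# direction is d = theta1 - theta0, so alpha = 0 recovers theta0 (e.g. the
# base model) and alpha = 1 recovers theta1 (e.g. the chat model).

theta0 = [0.0, 1.0, 2.0]  # toy stand-in for one checkpoint's weights
theta1 = [1.0, 1.0, 0.0]  # toy stand-in for another checkpoint's weights
d = [b - a for a, b in zip(theta0, theta1)]

def perturbed(alpha):
    return [t + alpha * di for t, di in zip(theta0, d)]

for alpha in (-0.5, 0.0, 0.5, 1.0):
    print(alpha, perturbed(alpha))  # ASR is then measured at each sampled model
```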
3. **More discussions on limitations and future work.**
We believe there are multiple directions for future research, and our work is an important first step in exploring the safety landscape of popular open-source LLMs. Given our findings in the General Response, where we observe that the shape of the LLM capability landscape differs significantly from that of the LLM safety landscape, a potential direction for future work is to explore how to better balance the tradeoff between capability and safety, e.g., finding the optimal capability performance for a given dataset while staying within the safety basin. Another direction to explore, inspired by Reviewer h1px, is proposing additional sub-metrics such as basin width, depth, and smoothness. Our VISAGE score, defined as the average safety margin of all models sampled along random directions, can be considered as an average depth within the safety basin. The VISAGE score is a byproduct of our novel findings on the safety basin, and we hope our study will inspire further research into proposing more metrics, including width and smoothness.
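As a loose sketch of the VISAGE computation described above (assuming, purely for illustration, that the safety margin of one sampled model is taken as $1 - \mathrm{ASR}$; the paper's exact definition may differ):

```python
# Loose sketch of VISAGE as "average safety margin along random directions".
# The margin definition (1 - ASR) is purely illustrative, not the paper's
# exact formulation.

def visage(asr_per_sampled_model):
    margins = [1.0 - asr for asr in asr_per_sampled_model]
    return sum(margins) / len(margins)

# toy ASR values of models sampled along one random direction
sampled_asr = [0.02, 0.05, 0.10, 0.40, 0.90]
print(round(visage(sampled_asr), 3))  # higher score = safer basin on average
```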
We sincerely thank the reviewer for all the constructive feedback, and we will add the two related works to the “LLM safety alignment” part of Sec. 2 in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments and clarifications. These have addressed my remaining concerns.
I have no further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comments and questions! We are grateful for your engagement in the rebuttal process. We are glad that we have addressed all your concerns! We will add those additional experiments and clarifications in the final version. | Summary: Inspired by the work of visualizing loss landscapes, the authors of this paper ask if there is a similar geometric interpretation of the weight space of LLMs and their respective vulnerability to answering unsafe questions. They provide a novel set of tools for perturbing the weight space of models, either along random directions or interpolated between two models. Based on these tools they suggest that there might be safety basins, a maximum radius after which weight perturbation quickly recovers unsafe behaviour and provide a measure of this radius called VISAGE. They observe that both model type and system prompts can have a large impact on the VISAGE score.
Strengths: The notion of safety basins is a very interesting and novel phenomenon which I think might have merit for understanding how to improve training-time and adversarial safety (make the basins larger!).
Regardless of whether safety basins actually exist or not, the biggest strength of the paper is the development of tools for understanding weight perturbation and safety from a geometric perspective, and I expect this to become a very important training dynamic analysis tool with respect to safety.
Weight perturbation as a defence against GCG and PAIR is also an interesting notion that they demonstrate empirically.
Weaknesses: There is a conceptual clarity issue in the paper where loss landscape and “landscape drawn by ASR and weight perturbations” are being confused and muddled up in the motivations and through the paper when works drawing conclusions about the loss landscape are cited (IMO these should be removed as they are not relevant). I would advise the authors to thoroughly distinguish the two and make it clear to the reader how they are different and that your paper is not talking about loss landscapes. Along these lines “Model Landscape” is a very confusing and unclear term to me - Perhaps the clearest is attack success-weight landscape…
At this point, the paper has a critical experimental flaw: since the ASR metric is a refusal keyword detector, there are many alternative explanations for the “basin” shape the authors are getting through weight perturbation, and no controlled study attempts to remove these confounders. For example, weight perturbation could just be generating gibberish text, which would result in 100% ASR, or unrelated text, or text that is safe but otherwise doesn’t use refusal keywords.
In order to recommend acceptance I need to see a few things:
(1) Measure a few (maybe 2-3) other capabilities like mathematical reasoning for random 1/2D perturbation - If there is a similar basin for all of these under perturbation, then I think the paper will need to be rejected, since we are just observing that perturbation ruins models, which is obvious.
(2) Use a text fluency measure like perplexity in these perturbed regions.
(3) Use a few alternatives to ASR keyword measure (for all analysis such as for computing VISAGE) - In particular use a harmfulness measure like LLamaGuard which was designed for this purpose.
(4) Show a qualitative demonstration of the text the model generates in each region (a random selection of N samples, not cherry-picked)
If the authors can provide these controls in the paper I would be willing to raise my scores, since I would be convinced that safety basins do indeed exist and are an important phenomenon to raise. I do acknowledge their observation on lines 187-188, but it’s an observation without experimental demonstration.
However there is another issue with the paper that would also limit me raising my score, safety is only evaluated using one dimension of safety: harmful question answering. I would encourage the authors to try to find safety basins in other cases of safety such as toxic content generation, phishing, bias, cybersecurity and weapons development. Without this, I am concerned the safety basin finding would only be limited to harmful question answering.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you clarify why Fig. 1 (B), section 4.3 is a novel finding? I might be missing something, but since the height of the basin is the ASR and harmful fine-tuning increases ASR, isn’t this what we would naturally expect? Is the innovation the plotting of interpolated models along a direction to see when the ASR rises? Or maybe it's due to the radius of the allowed weight perturbations? Any clarity here would be appreciated.
Section 4.4: What were your findings on the size of the basin training with 100-shot safe alone? I think without this experimental result Table 1 is not properly controlled experimentally. (Ideally 100 random samples that are neither safe nor unsafe would provide an even better additional control!)
## Suggestions and Comments
2: adding “are” here is not grammatically correct
3: what does it refer to? safety alignment, not clear
13: The LLM Safety landscape
22: Did you mean to cite this paper for rejection sampling? or [25] instead?
79-80: I don’t think it's correct to say advanced capabilities are attributed to safety alignment - I think the consensus is usually the opposite, that capabilities and alignment are largely orthogonal and that alignment imposes a “tax” (https://arxiv.org/abs/2112.00861) on capabilities.
86: Recent work has shown
106-107: Aren’t prompts a set of tokens and tokens what comprises prompts? I don’t think this distinction is clear to me. Perhaps Human prompt strategies versus optimization strategies is clearer.
127: suggestion - make it clear that i indexes each layer.
150: Since a recent user study
153: Temperature 0 is greedy decoding - https://arxiv.org/pdf/1811.02549
Figure 2 and others - Please label these axes so it's clear that these are the perturbation values.
187-188: Provide this analysis in the appendix.
193-195: Provide this analysis in the appendix.
223: Can successfully measure
238-239: I don’t agree with this statement, the distribution of harmful samples used for evaluation and fine-tuning are very similar both drawn from the harmful question answering task. In order for it to be task-agnostic, you’d have to show the evaluation works across different types of unsafe distributions like toxic content generation, weapons development, bias&fairness.
277: System prompt design
Table 2: why were these not applicable? It would be nice to highlight the highest scores for clarity of reading.
289: space missing
325: safegaurds
326: are still effetive
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I don’t think the authors provide adequate discussion of the limitations of either their perturbation tools, their VISAGE measure, or their experimental design. I have provided some suggestions above but some additional food for thought on the tools and measure are: What are the limitations of only selecting 1 or 2 dimensions? What geometry is being assumed for VISAGE? (i.e. can we find small perturbations within these norms that are sharp transitions to unsafe behaviour but everywhere else a wide flat safety basin?) Is that assumption justifiable?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad the reviewer finds our paper novel and considers the safety basin a very important tool for evaluating LLM safety finetuning. We also thank the reviewer for their constructive suggestions. We hope the following clarifications address the reviewer’s concerns:
1. **Measure a few other capabilities like mathematical reasoning for random perturbation.**
We conducted additional experiments to evaluate three datasets covering capabilities in math, history, and policy from MMLU. The shape of the LLM capability landscape is drastically different from the one in the LLM safety landscape; these landscapes do not exhibit the same trend, further highlighting our research’s novelty and confirming that the basin shape is indeed unique to the safety of LLMs. Please check General Response -> Concern 2 for more details.
2. **Use a text fluency measure like perplexity in these perturbed regions and show qualitative demonstrations of the generated text.**
We have conducted additional quantitative and qualitative experiments, which show that LLMs speak fluently even when ASR is high. We measured results quantitatively by using the perplexity on MTBench, and qualitatively by listing generated responses sampled along these directions. Please check General Response -> Concern 3 for more details.
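For reference, the perplexity used as a fluency measure is the standard exponentiated negative mean token log-likelihood; a minimal sketch with toy log-probabilities:

```python
import math

# Perplexity as exp(-mean token log-likelihood). The per-token log-probs
# below are toy values standing in for real model outputs: a fluent model
# assigns its generated tokens high probability (low perplexity), while a
# model emitting gibberish is "surprised" by its own tokens (high perplexity).

def perplexity(token_logprobs):
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

fluent    = [-0.10, -0.30, -0.20, -0.15]  # confident model, low perplexity
gibberish = [-5.00, -6.20, -4.80, -5.50]  # surprised model, high perplexity
print(perplexity(fluent), perplexity(gibberish))
```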
3. **Different refusal evaluation methods other than keyword search.**
We have expanded our experiments by testing an additional evaluation metric, Llama Guard 2. Our results demonstrate that the LLM safety basins exist regardless of the harmfulness evaluation metrics. Please check General Response -> Concern 1 for more details.
4. **Different safety datasets other than the AdvBench.**
We have expanded our experiments by testing another safety dataset, the policy-oriented safety evaluation (POSE) benchmark. This benchmark goes beyond harmful question-answering and includes unsafe instructions regarding hate/harassment/violence, physical harm, malware, political campaigning, tailored financial advice, etc. Our results demonstrate that the LLM safety basins exist regardless of the safety dataset. Please check General Response -> Concern 1 for more details.
5. **Remove the discussions on loss landscape in the motivation.**
We want to emphasize that our paper strictly follows the definition of loss landscape in the original paper [1], defined as an empirical loss function (averaged over a set of data samples) of a neural network with low-dimensional perturbations on the model weights for visualization. Our safety basin analysis falls within loss landscape analysis by considering a binary 0-1 loss function that defines the attack success of each attack query (safety keyword detection and Llama Guard 2). Following the reviewer’s comment, we will tighten the connection of our loss function to loss landscape analysis in our revised version.
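For illustration, the 1D landscape sampling described here can be sketched as perturbing the weights along a normalized random direction and evaluating a loss at each step. This is only a toy sketch: the function names, the quadratic stand-in loss, and the normalization scheme are our assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def landscape_1d(theta, eval_loss, alphas):
    """Evaluate a loss along theta + alpha * d for one random direction d."""
    d = rng.standard_normal(theta.shape)
    d *= np.linalg.norm(theta) / np.linalg.norm(d)  # scale d to the weight norm
    return [eval_loss(theta + a * d) for a in alphas]

# Toy quadratic "loss" whose basin sits at the unperturbed weights theta
theta = np.array([1.0, -2.0, 0.5])
alphas = np.linspace(-0.5, 0.5, 5)
losses = landscape_1d(theta, lambda w: float(np.sum((w - theta) ** 2)), alphas)
```

Replacing the toy quadratic with a 0-1 attack-success loss averaged over a query set would give a (sketch of a) safety landscape slice.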
6. **In Table 2, why are certain system prompts not applicable to one LLM?**
For Llama3, it’s because there is no default system prompt in the initial release, so we leave the default system prompt of Llama3 blank. For all other LLMs in the “safety” column, it’s because we are using the optimized safety prompts specific to each LLM from “On Prompt-Driven Safeguarding for Large Language Models” Appendix L. In the provided safety prompts, only Mistral overlaps with our research and there are no safety prompts provided for Llama2, Llama3, and Vicuna. We will make it clear in the paper why these are not applicable.
7. **Why are Fig. 1B and Section 4.3 novel findings?**
Thanks for providing an opportunity for us to explain the novelty of our findings in these sections. The model’s performance **at the origin** is known from the literature, and we know for certain that performance will degrade after malicious finetuning. However, how the model evolves from the origin to the breaking point is an important topic that is less studied in today’s LLM safety research. Is it a linear interpolation between the origin and the breaking point, or does the model maintain its performance and then change suddenly? Our additional experiments in General Response -> Concern 2 show that the shape of the LLM capability landscape is drastically different from that of the LLM safety landscape; these landscapes do not exhibit the same trend, further highlighting our research’s novelty and confirming that the basin shape is indeed unique to LLM safety. The gradual changes in the capability landscape align with common expectations, but the significantly different shape of the safety landscape is surprising and informative for future defenses!
8. **Additional results on finetuning with 100-shot safe dataset.**
We provide additional results comparing finetuning with a 100-shot safe dataset to finetuning with a mixture of 100-shot unsafe and 100-shot safe datasets. The results show that finetuning with the safe dataset is more robust than finetuning with the mixture of both safe and unsafe datasets, and significantly more robust than malicious finetuning.
| Model | VISAGE | AdvBench Samples | 100-shot unsafe | 100-shot safe + 100-shot unsafe | 100-shot safe |
| :-- | --: | --: | --: | --: | --: |
| Llama2-7B-chat | 85.32 | 520 | 95.4 | 0.2 | 0.1 |
| Vicuna-7B-v1.5 | 73.26 | 520 | 96.7 | 1.2 | 1.0 |
We sincerely thank the reviewers for thoroughly going through the paper and providing detailed and useful comments to help improve it! These editing suggestions and grammar issues will be addressed in the revised version.
[1] Visualizing the Loss Landscape of Neural Nets
---
Rebuttal Comment 1.1:
Title: Thank you for your great revisions
Comment: I want to thank the authors for their further revisions; especially given the short time period, the efforts are very much appreciated.
I think that all of my concerns have been thoroughly answered and I have raised my scores accordingly. I also want to apologize for not considering 0-1 loss as a loss landscape measure, that was an oversight on my part.
I want to emphasize that I think this is an important finding given the sharp drop-off of safety under perturbation versus capabilities. I would have raised my scores higher if there were a more comprehensive analysis of different capabilities under perturbation than MMLU, and if I could confirm that the paper has been revised to clearly state what is unique about safety: not that it is a basin - since capabilities "appear" to be basins as well - but that it is a basin with much sharper curvature. The sharp curvature provides excellent evidence that safety-guarding mechanisms appear to be much more brittle than capability degradation.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for the positive feedback and for raising the score! We are glad that we have addressed all your concerns. We have also conducted additional experiments on the MT-Bench capability landscape. Following the official MT-Bench evaluation repo, we reported the capability scores evaluated by GPT-4, with scores on a scale of 10.
Our findings show that the shape of the LLM capability landscape differs significantly from that of the LLM safety landscape (Fig. E in the code snippet), indicating that these landscapes do not exhibit the same trends. This distinction underscores the novelty of our research, confirming that the basin shape is indeed unique to LLM safety.
Since we are not allowed to attach external links to figures during the discussion period, per the rebuttal instructions, we have included the following code snippet for plotting the MT-Bench capability landscape mentioned above. We will include a link to the figure if the AC permits.
```python
import matplotlib.pyplot as plt
x = [-0.5, -0.45, -0.4, -0.35, -0.3, -0.25, -0.2, -0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5]
y = [0.25, 0.71, 1.57, 2.67, 2.57, 3.29, 4.88, 5.19, 5.73, 6.08, 6.23, 5.98, 5.58, 4.59, 3.81, 2.73, 2.42, 2.27, 1.21, 0.41, 0.32]
plt.plot(x, y)
plt.title('Fig. E: Llama2-7B-chat capability landscape on MT-bench')
plt.xlabel("Perturbation Amount")
plt.ylabel("GPT-4 score")
plt.show()
```
Thank you again, and we are glad we have addressed all your concerns! | Summary: This paper aims to measure the LLM’s robustness against fine-tuning attacks by introducing the concept of “safety basin”. A new metric, VISAGE score, is proposed to measure the risk in fine-tuning without the need to actually fine-tune the LLM using a harmful dataset. The experiments demonstrate the proposed VISAGE score has a positive correlation with the robustness of LLMs against fine-tuning attacks.
Strengths: 1. This paper explains the success of fine-tuning attacks by navigating the LLM safety landscape.
2. The writing is clear and easy to follow.
3. The experiments demonstrate the proposed VISAGE score has a positive correlation with the robustness of LLMs against fine-tuning attacks.
Weaknesses: 1. The evaluation metric ASR used in Section 3 is not rigorous. Since ASR only captures refusal words, the increase in ASR may be because of the model’s decrease in utility, i.e., output random content after adding too much noise to the model weight. In this case, the ASR metric can also reach 100%. Therefore, I doubt the experimental results and the corresponding conclusions drawn in Section 3.
2. The conclusions drawn from the experiments in Sections 4 and 5 are not surprising. Specifically, the conclusions from Section 4.2 and Section 5 are already well-known [1][2]. In addition, since fine-tuning will compromise safety, it is natural that during the fine-tuning process, there is a gradual increase in ASR, which will form a “basin-like” shape in Figure 1 (b).
3. The experimental results are inadequate to support the claims. See Questions 1 and 3 for details.
[1] Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
[2] Defending ChatGPT against Jailbreak Attack via Self-Reminder
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Is the basin in ASR really a safety basin? Or it is just the utility basin? Providing example outputs when the noise is large as well as the utility benchmark results (e.g., MT bench or Alpaca Eval 2, or just simply PPL) could be more convincing.
2. Why is there a safety basin for Vicuna? Vicuna doesn’t have a safety alignment.
3. Are there results of models other than Llama-2-7B-chat and Vicuna-7B-v1.5 in Table 1? Comparing Llama2 and Vicuna only for showing the high correlation between VISAGE and model safety might not be statistically significant.
4. What is the chat template for the Llama2-7B model? To my knowledge, there is no chat template for the base model, and the base model may not be used for chatting, except using URIAL.
5. Is there any comparison of costs between measuring model safety using VISAGE and performing fine-tuning attacks directly? Measuring the safety basin using VISAGE seems also computationally expensive.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: It seems that the authors didn't mention limitations in the paper. There is no potential negative social impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the constructive suggestions, and we hope the following clarifications can address the reviewer's concerns:
1. **Does the model still generate fluent output when ASR is high?**
We have conducted additional quantitative and qualitative experiments, which show that LLMs speak fluently even when ASR is high. We measured results quantitatively by using the perplexity on MTBench, and qualitatively by listing generated responses sampled along these directions. Please check General Response -> Concern 3 for more details.
2. **The conclusions drawn from Sections 4 and 5 are not surprising.**
Thanks for providing an opportunity for us to explain the novelty of our findings in these sections. The model’s performance **at the origin** is known from the literature, and we know for certain that performance will degrade after adding large perturbations at a certain point. However, how the model evolves from the origin to the breaking point is an important topic that is less studied in today’s LLM safety research. Is it a linear interpolation between the origin and the breaking point, or does the model maintain its performance and then change suddenly? Our additional experiments in General Response -> Concern 2 show that the shape of the LLM capability landscape is drastically different from that of the LLM safety landscape; these landscapes do not exhibit the same trend, further highlighting our research’s novelty and confirming that the basin shape is indeed unique to LLM safety. The gradual changes in the capability landscape align with common expectations, but the significantly different shape of the safety landscape is surprising and informative for future defenses!
3. **Safety basin for Vicuna.**
Vicuna is finetuned from a Llama base model using user-shared conversations gathered from ShareGPT.com, a website where users can share their ChatGPT conversations. ShareGPT contains numerous portions of instruction-tuning data related to safety. These data include supervised responsible answers like “I'm sorry, I cannot provide information on harmful or illegal activities” and “I am not able to provide personal opinions or evaluations of individuals.” Finetuning on such datasets is essentially the same as safety supervised finetuning (SFT), which is an essential step in most LLMs’ safety alignment, e.g., Llama2 and Llama3. However, Vicuna is not further finetuned with DPO, RS, or other RL methods, thus making the model less robust than the Llama models (lower VISAGE score), though it still possesses safety to a certain extent.
4. **Chat template of Llama2-7B base model.**
The chat template in Fig. 2a in the paper is a general term defining the format of the conversation used in the evaluation. While interpolating the model weights between the base and the chat model, we need to ensure the chat format remains consistent. However, the base and the chat model use different chat formats, so we ablate on both chat formats. Our results in Fig. 2a show that the chat model exhibits higher safety than the base model, as expected. However, the model also shows a drastic increase in safety while using the Llama2-7b-chat chat template. We will add the above clarifications in the final version.
5. **Computation time of the VISAGE score.**
We use a single A100 node for safety landscape computation, and it takes ~12min to plot the landscape along a 1D random direction. Meanwhile, finetuning on 100-shot samples for 5 epochs takes ~16min under the same hardware configuration. Note that finetuning requires a model to finetune on a set of representative downstream tasks (significantly larger than 100-shot samples) and evaluate its safety, while our VISAGE definition makes no assumptions on the downstream finetuning dataset, serving as a task-agnostic safety metric that measures finetuning risks; this is also verified by our additional results on Llama Guard 2 and POSEBench in the General Response. We also hope that our research can inspire future work that further accelerates the computation time of our new metric.
6. **Finetuning results other than Llama-2-7B-chat and Vicuna-7B-v1.5.**
We finetune Llama3-8B-instruct with the default safety system prompt from Llama2. Our VISAGE score clearly indicates the model’s safety after malicious finetuning.
| Model | VISAGE | AdvBench Samples | Aligned | 10-shot | 50-shot | 100-shot |
| :-- | --: | --: | --: | --: | --: | --: |
| Llama3-8B-instruct | 90.40 | 80 | 0 | 87.5 | 90 | 98.8 |
| Llama2-7B-chat | 85.32 | 80 | 0 | 90.0 | 91.3 | 100.0 |
| Vicuna-7B-v1.5 | 73.26 | 80 | 5.0 | 95.0 | 97.5 | 100.0 |
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed responses
Comment: Thank you for the detailed responses! Most of my concerns are well addressed. However, I still have two questions:
1. Since you are using MT-bench questions, could you also report MT-bench scores so it can be more comprehensive?
2. I still don't quite get the chat template you are using for the base model. Can you please specify it by giving the exact chat template you are using?
Looking forward to your response!
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We are pleased to hear that we have addressed most of your concerns. Below, we provide clarifications to the two additional questions you raised:
1. We followed the official MT-Bench evaluation repo and reported the capability scores evaluated by GPT-4, with scores on a scale of 10. Our findings show that the shape of the LLM capability landscape differs significantly from that of the LLM safety landscape (Fig. E in the code snippet), indicating that these landscapes do not exhibit the same trends. This distinction underscores the novelty of our research, confirming that the basin shape is indeed unique to LLM safety. As we are not allowed to attach external links to figures during the discussion period per the rebuttal instructions, we have included the following code snippet for plotting the MT-Bench capability landscape mentioned above (we will include a link to the figure if AC permits):
```python
import matplotlib.pyplot as plt
x = [-0.5, -0.45, -0.4, -0.35, -0.3, -0.25, -0.2, -0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5]
y = [0.25, 0.71, 1.57, 2.67, 2.57, 3.29, 4.88, 5.19, 5.73, 6.08, 6.23, 5.98, 5.58, 4.59, 3.81, 2.73, 2.42, 2.27, 1.21, 0.41, 0.32]
plt.plot(x, y)
plt.title('Fig. E: Llama2-7B-chat capability landscape on MT-bench')
plt.xlabel("Perturbation Amount")
plt.ylabel("GPT-4 score")
plt.show()
```
2. You are correct that the base model doesn’t use a chat template. In the paper, the term “template” refers to a preprocessing step applied to the raw user input to ensure it aligns with the model’s chat requirements. For the base model, this means there is literally no template; it is simply text completion. We will clarify this in the revised version.
We hope our responses have fully addressed your concerns. We look forward to hearing from you and would be happy to address any remaining issues you may still have. If there are no further concerns, we kindly ask that you consider raising the score. | Summary: This paper looks at how robust/sensitive LLMs are in terms of safety training and finetuning. The authors study how robust models are by studying a "safety landscape" through perturbing the model's parameters in a random direction and evaluating the safety of the new perturbed model. They find many models exhibit a "safety" basin, or a region where the model is safe, with a sharp rise in unsafety after a certain point. The main contributions are:
- The authors propose a new metric VISAGE that studies safety by looking at the safety landscape
- They provide analysis across four open-sourced models, showing a similar phenomenon of a safety basin in all of them
- They further show VISAGE's usefulness by analyzing formulations like different system prompts and finetuning with unsafe data.
Strengths: The paper has many strengths
- The paper is well-written and easy to read.
- The paper provides a novel method for analyzing how sensitive the safety training of an LLM is.
- The proposed metric is practically useful and can be generally applied to any LLM to investigate how secure safety training of the LLM is
- The authors provide comprehensive analysis of popular open sourced LLMs
- The authors provide practical insights using their metric, such as the effect of system prompts on safety, how finetuning affects the safety of LLMs, and even a new direction to look into for potentially thwarting jailbreaks.
- I can see this as a potentially useful metric to look at in the future when aligning LLMs: Wide safety basins are more preferable than narrow ones.
Weaknesses: The main limitation of the potential use of the method proposed in the paper is that the metric could be computationally intensive to compute, especially for very large models: computing the VISAGE score requires approximating an expectation (Equation 5) over the "average safety margin of all models we have sampled along all random directions". The authors mention they found that 3 random directions was enough; however, it seems it is still necessary to sample many values of the multipliers alpha and beta, applying the perturbation and evaluating each perturbed model.
There is also a minor limitation: the refusal evaluation method used in all results (keyword search) is not the most accurate, as the authors mention.
Technical Quality: 3
Clarity: 4
Questions for Authors: I was curious about the following:
- VISAGE mainly looks at the average safety margin, I wonder if it could be possible/useful to divide this into a few more submetrics such as basin width (wider basins seem like they are more "robust), basin depth, and how smooth/bumpy the basin is.
- Could be interesting to see in a future work if there is any effect of Model size (ex 7B models vs 13B models)
- Could also be interesting in the future work to see if there is a similar basin for capability evals like MMLU
- The authors mention on line 70 "A naive defense method [to jailbreaks] is to perturb the model weights before generating the response.", it could be good to see how perturbations affect capabilities before considering a defense along these lines.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Generally I think this work shouldn't have any negative societal impacts - it could be possible that bad actors use the method to find effective jailbreaks on open-sourced models, but at that point, they could also, with less effort, just finetune the models to output harmful content.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for acknowledging the significance of our findings and contributions! We especially appreciate that the reviewers find our metric practically useful and can be applied to future LLM safety training analysis. We hope the following clarifications can address the reviewer's concerns.
1. **The effect of model size on safety basin (7B vs 13B models).**
We expand our experiment by scaling up the model size from Llama2-7B-chat to Llama2-13B-chat. Fig. D (PDF in General Response) plots the 1D safety landscape of both models. Interestingly, the larger model exhibits a wider safety basin, which echoes the reviewer’s point that a wider basin seems more robust and is a potential training goal for future LLM training.
2. **LLM’s capability landscape & How perturbations affect capabilities.**
We conducted additional experiments to evaluate on three datasets covering capabilities in math, history, and policy from MMLU. The shape of the LLM capability landscape is drastically different from the one in the LLM safety landscape; these landscapes do not exhibit the same trend, further highlighting our research’s novelty, and confirming the basin shape is indeed unique to the safety of LLM. Please check General Response -> Concern 2 for more details.
3. **Different refusal evaluation methods other than keyword search.**
We have expanded our experiments by testing an additional evaluation metric, Llama Guard 2. Our results demonstrate that the LLM safety basins exist regardless of the harmfulness evaluation metrics. Please check General Response -> Concern 1 for more details.
4. **Computation time of the VISAGE score.**
We use a single A100 node for safety landscape computation, and it takes ~12min to plot the landscape along a 1D random direction. Meanwhile, finetuning on 100-shot samples for 5 epochs takes ~16min under the same hardware configuration. Note that finetuning requires a model to finetune on a set of representative downstream tasks and evaluate its safety, while our VISAGE definition makes no assumptions on the downstream finetuning dataset, serving as a task-agnostic safety metric that measures finetuning risks; this is also verified by our additional results on Llama Guard 2 and POSEBench in General Response -> Concern 1. We also hope that our research can inspire future work that further accelerates the computation time of our new metric.
5. **More sub-metrics such as basin width, depth, and smoothness in future work.**
Thanks for the great suggestion! Our VISAGE score is defined as the average safety margin of all models sampled along all random directions, which can be thought of as the average depth in the safety basin. The VISAGE score is a byproduct of our novel findings on safety basin and we hope our study can inspire more future research on proposing more metrics including width and smoothness.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response!
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comments and questions! We are grateful for your engagement in the rebuttal process. We will add those additional experiments and clarifications in the final version. | Rebuttal 1:
Rebuttal: # General Response
We sincerely thank all reviewers for their thoughtful feedback. We are excited that they highlight the strengths of our paper:
- **Safety basin, a new phenomenon observed universally in popular open-source LLMs, contributes significantly to the AI safety community and beyond.** Our notion of the “safety basin” is a very interesting and novel concept for evaluating LLM’s model robustness (h1px, v82d, jxYX). The comprehensive analysis of popular open-source LLMs (h1px) offers the AI safety community a new set of tools for visualizing the safety impact of model weight perturbations (v82d) and opens up future directions of how to improve LLM’s safety alignment by making the safety basin larger (h1px, v82d).
- **Practically useful and valuable new VISAGE metric for future LLM alignment research.** Our novel VISAGE metric probes into the safety basin and measures the safety of an LLM’s local region in model parameter spaces (jxYX, h1px). VISAGE shows a clear demonstration of its value in determining how robust a model is against malicious finetuning (jxYX, KrSC), which may become a very important training dynamic analysis tool w.r.t LLM safety (v82d). The metric is practically useful and can be generally applied to any LLM to investigate how secure the safety training is (h1px).
- **Our safety basin research provides new insights on the design of LLM system prompts and jailbreaking attacks and defenses.** The safety basin highlights the system prompt’s critical role in protecting a model, and that such protection transfers to its perturbed variants within the safety basin (h1px, v82d). In terms of jailbreaking attacks, we empirically show that weight perturbation may serve as a defense against GCG and PAIR attacks (h1px, v82d).
Finally, we appreciate that reviewers find our paper well-written and easy to follow (h1px, KrSC, jxYX).
Below we address several concerns that reviewers have shared:
### Concern 1: Does the finding of the safety basin generalize to other evaluation metrics and safety datasets? (v82d, KrSC)
**We have expanded our experiments based on the reviewers’ suggestions to test an additional evaluation metric, Llama Guard 2 [1], and another safety dataset, policy-oriented safety evaluation (POSE) benchmark [2]. Our results demonstrate that the LLM safety basins exist regardless of the harmfulness evaluation metrics and safety datasets.**
**Harmfulness evaluation metrics.** We replace the safety keyword detection with Llama Guard 2 to evaluate whether the generated output is safe or not. Llama Guard 2 is an 8B parameter Llama3-based LLM safeguard model. It classifies content as safe or unsafe, and if unsafe, it also lists the content categories violated. As shown in Fig. A, Llama Guard 2 evaluation also shows a basin shape similar to the safety keyword detection.
**Safety dataset.** POSE benchmark is constructed based on the exhaustive lists of 11 prohibited use cases found in Meta’s Llama-2 usage policy and OpenAI’s usage policy. We evaluate the generated outputs using both safety keyword detection and Llama Guard 2. Fig. B clearly shows that on the new dataset, both evaluation metrics show a similar basin shape.
### Concern 2: what does the capability landscape look like? Is it the same as the safety landscape? (h1px, KrSC, v82d)
**We conducted additional experiments to evaluate on three datasets covering capabilities in math, history, and policy from MMLU [3], as suggested by h1px and v82d. The shape of the LLM capability landscape is drastically different from the one in the LLM safety landscape; these landscapes do not exhibit the same trend, further highlighting our research’s novelty, confirming the basin shape is indeed unique to the safety of LLM.**
We evaluate capabilities using the following three datasets from MMLU: abstract_algebra, high_school_us_history, and us_foreign_policy datasets. Fig. C presents the results of perturbing the Llama2-7B-chat weights along a 1D random direction. For controlled comparisons, all datasets are evaluated along the same random direction. We observe that the shape of the capability score varies significantly across different datasets. For example, in the abstract_algebra dataset, the model also peaks at $\alpha$ (x-axis) = 0.2, while in the us_foreign_policy dataset, the model achieves slightly better performance at $\alpha$= 0.15. In contrast, randomly perturbing model weights maintains the safety level of the original aligned model in its local neighborhood, showing a rapid decrease in safety at the brim of the basin. Such drastic changes are not observed in the capability landscape. The gradual changes in the capability landscape align more with the common expectations, but the significantly different shape of the safety landscape is surprising!
### Concern 3: Does the model still generate fluent output when ASR is high? (KrSC, v82d)
**We have conducted additional quantitative and qualitative experiments, which show that LLMs speak fluently even when ASR is high.** We measured results quantitatively by using the perplexity on MTBench, and qualitatively by listing generated responses sampled along these directions. We evaluate the perplexity of the perturbed Llama2-7B-chat model along a random direction using all 80 prompts from MTBench [4]. Fig. D demonstrates that the model maintains high fluency (low perplexity), even when ASR is high, except at the extremes (abs($\alpha$) > 0.4). Table A shows five responses sampled equally along the random direction ($\alpha$ = -0.5, -0.25, 0, 0.25, 0.5). At $\alpha$ = -0.25, we clearly observe the model speaks fluently but fails to refuse the harmful input.
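As a reference for the perplexity numbers above, perplexity is typically the exponential of the mean negative log-likelihood of the generated tokens; a minimal sketch (our simplification, not the authors' evaluation script):

```python
import math

def perplexity(token_log_probs):
    """exp of the mean negative log-likelihood over generated tokens."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model assigning probability 0.5 to every token has perplexity 2
ppl = perplexity([math.log(0.5)] * 10)
```

Low perplexity along a perturbation direction thus indicates fluent text even where the refusal metric reports a high ASR.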
[1] Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations.
[2] Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
[3] Measuring Massive Multitask Language Understanding.
[4] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
Pdf: /pdf/ddf5f909f0ca2b4a07852dd6dcd4d698c0b0eae9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SpGesture: Source-Free Domain-adaptive sEMG-based Gesture Recognition with Jaccard Attentive Spiking Neural Network | Accept (poster) | Summary: This work introduces an innovative framework for sEMG-based gesture recognition. It leverages Spiking Neural Networks (SNNs) and introduces a Jaccard Attention mechanism and Source-Free Domain Adaptation (SSFDA) to enhance model robustness and accuracy in real-world applications.
The framework achieves high accuracy (89.26%) on a newly collected sEMG gesture dataset with different forearm postures and maintains system latency below 100ms on a CPU, meeting real-time requirements.
The novel Jaccard Attention mechanism directly computes attention on spike sequences, preserving SNNs' low-power characteristics, while SSFDA enhances model adaptation without needing source data.
This framework significantly improves the performance and efficiency of sEMG-based gesture recognition systems, demonstrating practical applicability and offering a robust solution for human-computer interaction.
Strengths: * This work proposed a Jaccard-based attention mechanism and an SNN-oriented SFDA algorithm for SNNs, which achieves high accuracy (89.26%) on a newly collected sEMG gesture dataset.
* The author focuses on the actual application experience of the algorithm in the real world and makes deployments, which is very valuable.
Weaknesses: * There is a typo in the text. In the footnote to Figure 5, the explanations for the first and second columns are reversed.
* In section 5.5, you intend to compare the benefits of using SJA, but the algorithm time and memory usage are for the entire algorithm. I think the overall time and memory usage of the entire algorithm is very important, but if you can get the time of each part of the algorithm and only compare the benefits of the attention part, this will be more in line with what section 5.5 is about.
Technical Quality: 2
Clarity: 2
Questions for Authors: I don't understand how your proposed Jaccard attention reflects 'attention' within tokens. According to the formula, tokens at the same position in Q and K are computed to get a scalar representing 'attention', which is then used to multiply the token in V at the corresponding position. I don't understand the meaning of this procedure or how it works. I admit that this kind of attention calculation is very efficient, and it would be better if the article had an intuitive explanation of why the algorithm is effective.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: see Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***To reviewer*** Thank you for your thorough review and insightful feedback on our work. We are delighted that you recognize the strengths of our proposed Jaccard-based attention mechanism, our SNN-oriented SFDA algorithm, and the high accuracy (89.26%) achieved on our newly collected sEMG gesture dataset. We are particularly pleased that you appreciate our focus on real-world application experiences and deployments, which we believe are crucial for advancing human-computer interaction.
We value your suggestions for improvement and will consider them carefully to enhance the soundness and presentation of our work. Your feedback is instrumental in guiding our future research efforts.
***About Jaccard attention reflects 'attention' within tokens and intuitive explanation.*** Thank you for your insightful question. I appreciate the opportunity to provide a more detailed explanation of how our proposed Jaccard-based attention mechanism (SJA) reflects "attention" within tokens.
In traditional attention mechanisms, the relationships between the query (Q), key (K), and value (V) matrices are established through dot product operations. These dot products are then normalized and used to weigh the values in the value matrix (V). **The essence of this process is to compute the similarity between each query vector and all key vectors**, using these similarities to allocate attention weights and ultimately obtain a weighted sum of the values.
Our SJA mechanism replaces the dot product similarity with the Jaccard similarity, which is more suited for the binary nature of Spiking Neural Network (SNN) outputs. This is particularly useful as SNN data is often sparse and binary.
The SJA mechanism is defined by the following formula:
\begin{equation}
\mathrm{SJA}\left(\mathbf{Q}, \mathbf{K}\right) = \frac{\sum_{ij}\min\left(q_{ij}, k_{ij}\right)}{\sum_{ij}\max\left(q_{ij}, k_{ij}\right) + \epsilon} \ \mathbf{V},
\end{equation}
where $q_{ij}$ and $k_{ij}$ are the corresponding elements in the query and key matrices.
***Intuitive Explanation of SJA's Effectiveness:*** Imagine each token's spike train as a binary pattern representing different features or characteristics. The Jaccard similarity helps identify how much overlap (or commonality) there is between the features of the two tokens. By focusing on the overlapping features, the model can better understand the importance of these features and adjust the values (from \( V \)) accordingly, enhancing the overall representation and prediction accuracy. In detail,
1. **Utilizing Sparsity**: SNN outputs are typically sparse, and SJA leverages this sparsity by focusing on non-zero elements. Traditional dot product methods can be inefficient for sparse data as they involve numerous zero-value multiplications. SJA, however, uses element-wise minimum and maximum operations, concentrating on the non-zero elements, thus reducing computation time and energy consumption.
2. **Element-wise Similarity Measurement**: The Jaccard similarity measures the intersection ratio to the union of two sets. For binary vectors, this effectively captures the proportion of shared active elements. In our formula, $ \min(q_{ij}, k_{ij}) $ and $\max(q_{ij}, k_{ij}) $ represent the intersection and union of corresponding elements, respectively. This method directly reflects the similarity between the query and key vectors at the same positions, providing a meaningful measure of attention for sparse binary data.
3. **Improved Computational Efficiency**: Using element-wise minimum and maximum operations instead of matrix dot products, SJA significantly reduces computational complexity. Traditional attention mechanisms have a complexity of $O(n^2 \cdot d) $, whereas SJA has a complexity of $ O(b) $, where $ b $ is the number of non-zero elements. This reduction in complexity is particularly beneficial for handling large-scale sparse data, making SJA more efficient and suitable for deployment on spiking chips.
In summary, the SJA mechanism captures attention by measuring the Jaccard similarity between the query and key vectors, focusing on the significant non-zero elements, and leveraging the sparsity of SNN outputs. This approach not only reduces computational complexity but also enhances energy efficiency, making it an effective and efficient method for attention in SNNs. I hope this explanation clarifies how SJA works and why it is effective. Thank you for your attention and for providing the opportunity to elaborate on our work.
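To make the scalar-similarity reading above concrete, here is a minimal NumPy sketch of the SJA formula from this rebuttal (our illustration, not the authors' released implementation; the toy binary matrices are invented for the example):

```python
import numpy as np

def sja(Q, K, V, eps=1e-8):
    # Element-wise intersection (min) and union (max) of the binary
    # spike matrices, summed over all positions, give one Jaccard
    # similarity scalar that scales V -- no dot products involved.
    intersection = np.minimum(Q, K).sum()
    union = np.maximum(Q, K).sum()
    return (intersection / (union + eps)) * V

# Toy binary spike trains (tokens x features).
Q = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0]], dtype=float)
K = np.array([[1, 1, 0, 0],
              [0, 1, 0, 1]], dtype=float)
V = np.array([[1, 0, 0, 1],
              [1, 1, 0, 0]], dtype=float)

out = sja(Q, K, V)  # intersection = 2, union = 6, so out ≈ (1/3) * V
```

Note how the cost depends only on element-wise min/max over the non-zero spikes, which is what the rebuttal's complexity argument relies on.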
***About algorithm time and memory usage for the SJA.*** Thank you for your valuable feedback. We appreciate your suggestion to clarify the comparison of benefits in Section 5.5. We apologize for any confusion caused by our description. In Section 5.5, specifically in Figure 5, we indeed present the comparison of inference speed and RAM usage between SJA, Efficient Attention, and RAW Attention. The data shown pertains solely to the attention part of the algorithm, not the entire algorithm. Thank you for highlighting this area for improvement. We will ensure that this is more clearly communicated in the revised version of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. Your response effectively addressed my questions.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful comments and for acknowledging our rebuttal. We are glad that our response effectively addressed your questions. We would kindly like to ask if, given our clarifications, there might be room for reconsideration of the initial score. We greatly appreciate your time and consideration in this matter. | Summary: This paper proposes SpGesture, a surface electromyography (sEMG) based gesture recognition framework using Spiking Neural Networks (SNNs). The main contributions include: 1) A novel Jaccard Attention SNN (JASNN) model that enhances sparse spike sequence representations by directly applying Jaccard similarity computation in SNNs. This is the first time that attention mechanisms do not alter the high energy efficiency of SNNs, ensuring that SNNs only involve 0 and 1 computations; 2) The first introduction of Source-Free Domain Adaptation (SSFDA) in SNNs, using membrane potential as a memory list to generate pseudo-labels, improving model generalization in unlabeled environments; 3) A collected sEMG gesture dataset with different forearm postures. Experimental results show that SpGesture achieves the highest recognition accuracy of 89.26% on this dataset, with inference latency below 100ms on CPU, meeting real-time requirements.
Strengths: 1. Proposes Jaccard attention designed specifically for SNNs, enhancing feature representation while maintaining SNN computational efficiency. Ablation experiments validate its effectiveness.
2. Innovatively introduces source-free domain adaptation to SNNs, leveraging the membrane potential of SNNs as a memory feature and designing a label generation method. Improves generalization performance even when target domain data is unlabeled.
3. Systematically compares the performance of SpGesture with various state-of-the-art methods in terms of accuracy, inference speed, and memory consumption. The experimental design is reasonable and the results are credible.
4. Collects and open-sources a multi-posture sEMG dataset, providing a new benchmark for research. The data collection process is described in detail.
Weaknesses: 1. The SSFDA method currently only addresses distribution shifts caused by forearm posture variations. Its applicability to other potential factors such as electrode displacement needs further validation. The future work section could discuss how to extend the method.
2. Performance evaluation on real hardware such as neuromorphic chips remains to be supplemented. The authors plan to conduct tests on self-developed chips in the future.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Does the design of JASNN consider other similarity measures? Have attempts been made to apply Jaccard attention to other types of SNNs?
2. What are the characteristics of the sEMG distribution shifts corresponding to different forearm postures observed in the dataset? Is it possible to visualize the signal differences caused by different postures?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The paper comprehensively analyzes the limitations of the work in the Limitation section: 1) Extending SSFDA to more distribution shift scenarios; 2) Evaluating the applicability of the methods on more SNN structures; 3) Evaluating performance on neuromorphic chips. Future research directions are provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***To reviewer:*** Thank you for your thorough and insightful review of our paper. We are delighted that you found our contributions noteworthy. We appreciate your recognition of the Jaccard Attention SNN model, which **enhances feature representation while maintaining computational efficiency and is validated through ablation experiments. We are also grateful for your acknowledgment of our innovative introduction of source-free domain adaptation in SNNs, leveraging membrane potential as a memory feature to improve generalization performance.** Your positive remarks on our systematic comparison of SpGesture’s performance and the collection and open-sourcing of the multi-posture sEMG dataset further encourage us. Thank you for your valuable feedback and suggestions, which will guide our future research. Below, we will specifically address your questions.
***About the design of JASNN, its consideration of other similarity measures, and the scalability of Jaccard attention.*** Thank you for this insightful question. Indeed, we did consider other similarity measures during the development of JASNN. We experimented with cosine similarity and Pearson correlation coefficient but found that Jaccard similarity performed best for binary sparse data, which aligns well with the spiking nature of SNNs. Regarding the application to other SNN types, we have successfully applied Jaccard attention to other SNN architectures, including LSNN (Long Short-Term Memory Spiking Neural Networks). We observed consistent performance improvements across different SNN types, indicating the generalizability of our approach. In the revised version, we will include these additional results to demonstrate the broader applicability of our method.
***About the characteristics of the sEMG distribution shifts corresponding to different forearm postures.*** The characteristics of the sEMG distribution shifts corresponding to different forearm postures are demonstrated in Appendix Figure 9. We used data from Posture 1 for inference on data from Postures 1, 2, and 3 and subsequently calculated the accuracy. This highlights the dataset's out-of-distribution (OOD) nature, showing the signal differences caused by different postures. From Figure 9, it can be seen that the accuracy of the same gestures significantly decreases in Posture 2 and Posture 3, further demonstrating the OOD problem.
***About future suggestions, the extension of SSFDA to other distribution shift scenarios.*** We appreciate this excellent suggestion for our future work. You are correct that our current SSFDA method primarily focuses on distribution shifts caused by forearm posture variations. We acknowledge that this is an important area for expansion. In the revised version, we will extend our discussion in the future work section to address this point. We plan to outline our strategy for adapting SSFDA to handle other sources of distribution shift, such as electrode displacement, changes in skin conditions, and variations in muscle fatigue. This expansion will involve modifying our membrane potential memory mechanism and pseudo-label generation process to account for these additional factors.
---
Rebuttal Comment 1.1:
Comment: You have resolved some of the concerns I had, and I am inclined to raise my score. However, there are still some issues that need further improvement in the final version.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for considering raising the score. I appreciate your insights and will make sure to address the remaining issues in the final version. | Summary: The paper proposes a novel attention mechanism that utilizes the Jaccard similarity to replace the traditional dot-product approach. This allows Spiking Neural Networks (SNNs) to maintain their binary characteristics (0 and 1) during the forward pass,which is very important for hardware computation. Additionally, the paper introduces a source-free domain adaptation method for SNNs, leveraging a probabilistic pseudo-labeling technique. The approach is validated on several sEMG datasets, demonstrating its innovation and potential for broader application.
Strengths: 1. Innovative Jaccard Attention: The introduction of Jaccard attention presents a computationally friendly algorithm for SNNs, which shows great potential for wide application in the field.
2. First Source-Free Domain Adaptation in SNNs: The paper proposes the first source-free domain adaptation method in the SNN domain, which enhances the domain generalization performance of SNNs.
3. Comprehensive Validation: The paper includes data collection and validation across multiple datasets, with robust results indicating reliability.
4. Open-source Code: Making the code publicly available enhances reproducibility and assists other researchers in replicating and building upon the work.
Weaknesses: 1. Selection of Probability P for Pseudo-Labeling: The paper does not clearly explain how the probability P is chosen when generating pseudo-labels.
2. Jaccard Attention for 3D Data: There is a lack of discussion on how Jaccard attention would be calculated for 3D data.
3. Domain Shift Between Different Postures: The method lacks a clear demonstration or proof of domain shift between different postures.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How is the probability P determined when generating pseudo-labels?
2. How would Jaccard attention be calculated for 3D data?
3. Can you provide evidence or a demonstration of the domain shift between different postures?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: 1. The method has only been validated on sEMG time-series data and has not yet been extended to other modalities.
2. There is no analysis of the spatial complexity provided in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***To reviewer:*** Thank you for your insightful review of our paper. We are thrilled that you found the introduction of Jaccard Attention to be an innovative and computationally friendly algorithm for SNNs. Your recognition of our pioneering work on source-free domain adaptation in SNNs and its enhancement of domain generalization performance is highly appreciated. We are also glad that you valued our comprehensive validation across multiple datasets and the open-sourcing of our code for reproducibility. Your constructive feedback and suggestions will significantly guide our future research. We sincerely appreciate your valuable time and effort in reviewing our work.
***About how the probability $P$ is determined when generating pseudo-labels.*** Thank you for your question. With probability $P$, the pseudo-label is set to the mode (most frequent value) among the labels of the top-$k$ membrane potentials, yielding the high-probability pseudo-label; with probability $1-P$, a pseudo-label is instead selected at random from the top-$k$ membrane potentials. The value of $P$ ranges between $0$ and $1$. In our validation, we searched the space from $0.1$ to $0.9$ in increments of $0.1$ to determine the optimal value.
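As an illustrative sketch of the selection rule described above (the helper name and the use of Python's `random` module are our own; this is not the authors' code):

```python
import random
from collections import Counter

def pseudo_label(topk_labels, p, rng=random):
    # With probability p take the mode (most frequent label) among the
    # labels of the top-k membrane potentials; with probability 1 - p
    # pick one of the top-k labels uniformly at random.
    if rng.random() < p:
        return Counter(topk_labels).most_common(1)[0][0]
    return rng.choice(topk_labels)

# p = 1.0 always yields the high-probability (mode) pseudo-label.
assert pseudo_label([3, 3, 7, 3, 2], p=1.0) == 3
```

Sweeping `p` from 0.1 to 0.9 in steps of 0.1, as the rebuttal describes, would then just wrap this rule in a validation loop.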
***About extending Jaccard Attention to 3D data.*** Thank you for your insightful question regarding extending Jaccard Attention to 3D data. To address this, we have extended the original Jaccard Attention formula to handle 3-dimensional data. The original equation is:
\begin{equation}
\mathrm{SJA}\left(\mathbf{Q}, \mathbf{K}\right) = \frac{\sum_{ij}\min\left(q_{ij}, k_{ij}\right)}{\sum_{ij}\max\left(q_{ij}, k_{ij}\right) + \epsilon} \ \mathbf{V}
\end{equation}
For 3D data, we extend the indices $(i,j)$ to $(i,j,k)$ to account for the additional dimension. The extended equation is:
\begin{equation}
\mathrm{SJA}\left(\mathbf{Q}, \mathbf{K}\right) = \frac{\sum_{ijk}\min\left(q_{ijk}, k_{ijk}\right)}{\sum_{ijk}\max\left(q_{ijk}, k_{ijk}\right) + \epsilon} \ \mathbf{V}
\end{equation}
Here, $q_{ijk}$ and $k_{ijk}$ represent the elements of the 3D tensors $\mathbf{Q}$ and $\mathbf{K}$ respectively. The summations now run over all three dimensions $i$, $j$, and $k$. This extension allows us to apply the Jaccard Attention mechanism to 3-dimensional data, maintaining its computational efficiency and enhancing feature representation in SNNs.
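In code, this extension is essentially free: element-wise min/max and a full-tensor sum are rank-agnostic, so the 2D and 3D formulas reduce to the same operations. A small sketch (our illustration, not the authors' implementation):

```python
import numpy as np

def sja_nd(Q, K, V, eps=1e-8):
    # Works for tensors of any rank: the sums over (i, j) or (i, j, k)
    # are both just full reductions over all elements.
    return (np.minimum(Q, K).sum() / (np.maximum(Q, K).sum() + eps)) * V

# 3D binary spike tensors (e.g. time x tokens x features).
Q = np.zeros((2, 3, 4)); Q[0, 0, :2] = 1   # spikes at k = 0, 1
K = np.zeros((2, 3, 4)); K[0, 0, 1:3] = 1  # spikes at k = 1, 2
V = np.ones((2, 3, 4))

out = sja_nd(Q, K, V)  # intersection = 1, union = 3, so out ≈ V / 3
```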
***About demonstration of the domain shift between different postures.*** The characteristics of the sEMG distribution shifts corresponding to different forearm postures are demonstrated in Appendix Figure 9. We used data from Posture 1 for inference on data from Postures 1, 2, and 3 and subsequently calculated the accuracy. This highlights the dataset's out-of-distribution (OOD) nature, showing the signal differences caused by different postures. Figure 9 shows that the accuracy of the same gestures significantly decreases in Posture 2 and Posture 3, further demonstrating the OOD problem.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the authors' great effort. I have thoroughly reviewed their response. These responses have effectively addressed my questions and have further solidified my evaluation. | null | null | Rebuttal 1:
Rebuttal: Dear Area Chair,
We would like to express our gratitude to you for your dedicated efforts and contributions and to the reviewers for their constructive feedback on our submission. We are encouraged by the positive evaluation from all reviewers. All three reviewers acknowledged the innovative aspects of our work, particularly the introduction of the Jaccard Attention mechanism for Spiking Neural Networks (SNNs) and the novel Source-Free Domain Adaptation (SSFDA) method. The reviewers highlighted our approach's practical applicability and robustness, as evidenced by its validation on multiple sEMG datasets and its potential for real-world applications.
### Key Strengths Recognized:
1. ***Innovative Jaccard Attention***:
• Reviewers appreciated the computational efficiency and the preservation of the binary characteristics (0 and 1) in SNNs, which is crucial for hardware computation.
2. ***Source-Free Domain Adaptation***:
• The introduction of SSFDA in the SNN domain was noted as a significant contribution, enhancing the domain generalization performance without needing source data.
3. ***Comprehensive Validation***:
• Our method’s validation across multiple datasets and the robust results were commended, indicating the reliability and potential of our approach.
4. ***Open-Source Contribution***:
• The availability of our code for public use was highlighted as a positive step towards reproducibility and further research in this area.
### Areas for Improvement and Future Work:
1. ***Jaccard Attention for 3D Data***:
• We plan to extend our discussion on how Jaccard attention can be calculated and applied to 3D data in future iterations of our work.
2. ***Performance on Neuromorphic Chips***:
• We acknowledge the need for performance evaluation on real hardware, such as neuromorphic chips, and plan to conduct these tests in future research.
3. ***Additional Similarity Measures***:
• We will explore other similarity measures and their applicability to different types of SNNs, as suggested by the reviewers.
4. ***Visualization of sEMG Distribution Shifts***:
• We have included relevant visualizations of the signal differences caused by different postures in the appendix to understand the sEMG distribution shifts better.
We are pleased that the reviewers found our work to be technically solid and of significant impact. We are committed to addressing the identified weaknesses and incorporating the suggested improvements in our future research. We believe these enhancements will further strengthen our contributions and the overall quality of our work.
Thank you for considering our submission. We look forward to your decision and are excited about the potential impact of our research in the field of SNNs and gesture recognition. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sample Selection via Contrastive Fragmentation for Noisy Label Regression | Accept (poster) | Summary: This paper targets the noisy label regression problem. Inspired by the way classification losses help learn good representations, the authors first propose to discretise the continuous label space into pieces, which divides the data into disjoint fragments. They then form F/2 maximally contrasting fragment pairs, on which binary classifiers are built. The main learning objective is a mixture of experts (MoE), where neighborhood agreement is applied to enhance decision consistency. Finally, the representations and predictions are used to select clean samples, which then train the regressor.
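The fragmentation-and-pairing step described in this summary can be sketched roughly as follows; note the equal-width binning and the pairing of fragment f with fragment f + F/2 are our assumptions about one plausible reading, not the paper's exact procedure:

```python
import numpy as np

def fragment_and_pair(labels, F):
    # Discretize the continuous label space into F equal-width fragments,
    # then form F/2 maximally contrasting pairs by matching each fragment
    # f in the lower half with fragment f + F/2 in the upper half.
    edges = np.linspace(labels.min(), labels.max(), F + 1)
    frag = np.clip(np.digitize(labels, edges[1:-1]), 0, F - 1)
    pairs = [(f, f + F // 2) for f in range(F // 2)]
    return frag, pairs

labels = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
frag, pairs = fragment_and_pair(labels, F=4)
# frag -> [0, 0, 1, 2, 3, 3]; pairs -> [(0, 2), (1, 3)]
```

Each pair then trains one binary classifier, whose features feed the MoE-based clean-sample selection.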
Strengths: 1. The presentation is overall good and the idea of constructing maximally contrasting pairs is interesting.
2. The fundamental works in regression have been discussed, and many of them are compared in detail.
3. I think MoE is also a good choice to integrate all classification learners.
Weaknesses: 1. Some of the illustrations are not clear to me.
2. Some baselines that take order information into account should be discussed or compared, because binary classification ignores such information, which was considered important in previous regression works.
3. The technical contributions and novelty are not well highlighted in the main body of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. When illustrating the motivation of contrastive fragment paring, you pointed out the advantages could be better generalization and robustness. Regarding the generalization, were you referring to its performance on clean-label data training? Then is Fig. 2(c) on noisy or clean data? I felt confused when I read this part.
2. It seems that the proposed approach to construct F/2 maximally contrasting pairs of fragments ignored the order relationship between fragments that has been thought crucial to regression tasks [Yang et al., 2022b]. Can you provide some evidence if you do not think so?
3. The ratio of noise labels is important according to literature. Since the proposed method does not mention it in methodology, were you suggesting that the proposed method could be noisy ratio unaware and applicable to any level of noisy ratio?
4. The clean sample selection strategy in lines 185-188 looks intuitive. You can certainly consider the two views as somehow complementary when necessary, but your methodology did not touch on representation learning.
5. Zhang et al., 2023 in line 91 on page 3 seems a strong baseline, which compute distance on representation. May I know why it is not included?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: My major concern is that the proposed pair construction strategy is interesting but not well motivated by noisy label topic, which makes me doubt which components contribute much to this research problem. Also the connection with recent work lack deep discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **[Limitations, Q2]. Pair construction strategy is interesting but not well motivated by the noisy label topic casting concerns regarding contribution. The maximally contrasting pairs of fragments ignores order relation.**
We would like to clarify that ConFrag is strongly motivated by the topic of noisy label regression.
**Focus on noisy label regression.** A method tailored for noisy regression problems must primarily focus on detecting noise with high severity, rather than treating all noise as equal. The components of our method such as maximally contrasting fragment pairing and fragment prior, which are based on the distance in continuous and ordered label space, are designed to prioritize filtering out such high severity noise.
**Order Relation.** Our framework leverages ordinal relations by training on contrastive fragmented pairs to learn robust features. These features are then aggregated and ordered. Clean sample selection is done through the mixture of neighboring fragments which ensures the integrity of the learned order relations.
**W2. Discuss or compare baselines considering ordinality since binary classification ignores ordinality which however was thought important in previous regression works + Limitations state that “the connection with recent work lacks deep discussion.”.**
We discuss additional works that consider order information in Appendix E.1. Furthermore, we evaluate noisy regression baselines that account for ordinality in the introduction (lines 35-42), specifically C-Mixup [1] and OrdRegr [10]. OrdRegr is a noise transition matrix-based loss-correction method, which requires noise rate estimation. Even when provided with the ground-truth noise rate in synthetic settings, the algorithm was highly ineffective. For example, in IMDB symmetric 40% noise experiments, OrdRegr performed at least 31.86% worse than other baselines in terms of MRAE. Therefore, although we have implemented both methods, we only report C-Mixup results. We will ensure that OrdRegr results are included in the manuscript as well.
To the best of our knowledge, the most recent published related work is "Robust Classification via Regression for Learning with Noisy Labels," which improves classification by reformulating it as a regression problem. We will include this in the references. If there are any other works we have overlooked, kindly let us know and we will make sure to review and include them.
Additionally, we would like to reemphasize that our framework utilizes ordinal relations by training binary classifiers on contrastive fragmented pairs to develop robust features. These features are then aggregated and ordered. Clean sample selection is performed through the mixture of neighboring fragments, ensuring the preservation of the learned order relations.
**Q1. You pointed out the advantages could be better generalization and robustness. Regarding the generalization, were you referring to its performance on clean-label data training? Then is Fig. 2(c) on noisy or clean data?**
Generalization in the motivation of contrastive fragment pairing refers to the generalization of feature extractors trained on noisy datasets, as obtaining robust and generalizable features is crucial for sample selection in noisy label settings. Fig. 2(c) shows that under symmetric 40% label noise, training expert feature extractors on contrastive fragment pairs is better for sample selection (and thus better regression performance) than training a single feature extractor on all fragments, because the experts are less prone to overfitting and learn more generalizable features. We will also make it clear that the results in Fig. 2(c) are on noisy data.
**Q3. Is the proposed method noisy ratio unaware and applicable to any level of noisy ratio?**
The reviewer correctly noted that knowing the ratio of noisy labels beforehand is beneficial but challenging to estimate in practice. We address this point at the start of Section 2, line 86, by stating, 'ConFrag is noise rate-agnostic unlike prior methods as it operates without knowing a pre-defined noise rate.' This highlights that our method does not require prior knowledge of the noise ratio.
**Q4. Clean samples selection strategy from line 185-188 looks intuitive. You can certainly consider two views somehow complementary when necessary, but your methodology did not touch the representation learning.**
We use the term 'representation' to indicate that our fragment pair trained binary classifiers automatically learn data representations. This means they can be considered representational learners or feature extractors. However, we acknowledge that 'representation learning' often refers to techniques like self-supervised learning. We will clarify this distinction in the manuscript.
**Q5. Zhang et al., 2023 in line 91 on page 3 seems a strong baseline, which compute distance on representation. May I know why it is not included?**
OrdinalEntropy [8] proposes a regularizer that learns higher entropy features by increasing the distance between representations while preserving the ordinality via weighting the representation and target space distances. Since it doesn’t tackle the noisy regression problem directly on its own, it was not included as a baseline in the manuscript. However, it is surely something we can try to mix together with other baselines as well as our method to enhance the ordinality and to better learn the high-entropy features.
Results in Table R3 show that combining OrdinalEntropy with vanilla, baselines, and our method shows a slight drop in performance.
**W1. Some unclear illustrations.**
We will enhance Figure 2(a) to aid understanding. If there are any other illustrations that require attention, we would be happy to revise them!
**W3. Clearly state technical contributions.**
We will highlight the technical contributions and novelty more clearly in the manuscript, especially within the introduction.
---
Rebuttal Comment 1.1:
Title: Thanks for your response.
Comment: I think most of my concerns are addressed. I have updated my score. | Summary: This paper aims at addressing noisy labels in real-world regression problems and propose the ConFrag method. ConFrag transforms regression data into disjoint fragmentation pairs to enhance the distinction between clean and noisy samples. It leverages neighboring fragments and expert feature extractors to identify noisy labels through neighborhood agreement. The approach is validated through experiments on diverse datasets and introduces a new metric, Error Residual Ratio (ERR), which shows its consistent superiority over fourteen existing methods.
Strengths: 1. This method leverages the inherent orderly relationship between the label and feature space to identify noisy labels.
2. Four newly curated benchmark datasets and a metric are constructed for conducting experiments.
Weaknesses: 1. The motivation behind the idea should be stated more clearly in the introduction section. Since there is a close connection between labels and features, it is necessary to clarify why contrastive fragment pairing is introduced and whether the design of the data selection metric considers the connection.
2. Certain statements in the paper require further clarity. For instance, the meaning of "f" in "all of the noisy labeled data (f…)" needs explicit clarification upon first mention, although the meaning of the symbol can be inferred in subsequent discussions. Additionally, the phrase "we employ neighboring relations in fragments to leverage the collective information learned" in the introduction warrants elaboration.
3. It is novel to create four curated benchmarks. However, it is recommended to also validate your method on the existing noisy regression datasets used in other papers.
Technical Quality: 3
Clarity: 2
Questions for Authors: Since contrastive fragment pairing transforms some closed-set label noise into open ones, could this nature be considered when designing the data selection metric to choose open noise labels?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. The motivation behind the idea should be stated more clearly in the introduction section. Since there is a close connection between labels and features, it is necessary to clarify why contrastive fragment pairing is introduced and whether the design of the data selection metric considers the connection.**
We appreciate the feedback and suggestions! To clarify the motivation behind our idea, we will elaborate on the rationale for contrastive fragment pairing. Our method addresses the challenges of learning from noisy data by employing distinctive feature matching, which improves generalization [12,13]. Additionally, this approach helps convert closed-set noise into open-set noise, which is less harmful to the learning process [14].
We will also explain that our data selection approach, the mixture of neighboring fragments (Section 2.3), considers the correlation between labels and features, namely that samples with close label distances are likely to have similar features. This correlation is essential for effective fragmentation and for sample selection using the mixture of neighboring fragments.
**W2. Certain statements in the paper require further clarity. For instance, the meaning of "f" in "all of the noisy labeled data (f…)" needs explicit clarification upon first mention, although the meaning of the symbol can be inferred in subsequent discussions. Additionally, the phrase "we employ neighboring relations in fragments to leverage the collective information learned" in the introduction warrants elaboration.**
We will make sure to include, upon the first mention, that $f$ denotes the index of the fragment to which the data is assigned! We will also elaborate on the phrase "we employ neighboring relations in fragments to leverage the collective information learned," explaining that it utilizes a mixture model [15] to achieve probabilistic consensus in both the prediction and representation spaces.
**W3. The performance on existing noisy regression datasets?**
We appreciate your recognition of our four curated benchmarks, covering age, price, and music production year prediction tasks. As suggested by the reviewer, we further evaluate the performance of our approach and the baselines on two noisy regression datasets from the existing noisy regression literature. The first dataset is from SuperLoss [9, 11], which injects 0.2, 0.4, 0.6, and 0.8 symmetric noise into the UTKFace dataset. The second dataset is IMDB-org-B, a real-world noise dataset studied in [16, 3, 17]. The results, reported in Table R4, show that we perform on par with or better than the baselines on both datasets.
**Q1. Could the nature of closed-set noise transformation into open-set noise be used advantageously during the data selection metric?**
Thank you for the insightful question. As illustrated in Fig. 2(a), transforming closed-set noise into open-set noise can also be seen as converting it into anomalies or outliers. By leveraging the extensive literature on anomaly and outlier detection techniques, we could effectively handle open-set noise to enhance the data selection process.
---
Rebuttal Comment 1.1:
Title: Maintain the rating
Comment: The author has fully answered my question. I maintain my original evaluation of this paper. | Summary: This paper addresses the issue of label noise in regression tasks. The proposed method partitions the data and trains several binary classifiers for the most distant partition pairs. Noisy data samples are detected using the voted probability of all classifiers. The method outperforms other baselines on several public datasets across different domains, as measured by a newly proposed metric.
Strengths: The intuition behind the method is sound. It aims to identify samples whose y values are most misaligned with their x values, based on the weighted combined opinion of all classifiers.
Weaknesses: 1. The focus on regression tasks is limiting. Given that the proposed method transforms regression tasks into classification tasks, it could potentially be extended to address data noise in classification tasks using the same intuition.
2. The paper lacks a detailed discussion of the types of noise detected and not detected. It's unclear whether this method can detect close-set noise for individual classifiers or if it's more effective at identifying noisy samples at the center or boundaries of the fragments.
3. There is no ablation study on the amount and type of noise. The paper doesn't address whether the method is effective for all noise ratios or explore the typical types and ratios of label noise in real-life scenarios. It's unclear how the method performs with varying levels of noise (e.g., 1%, 0.5%, 5% of noise data or noise strength).
4. The paper doesn't consider scenarios where the noise degree is so small that most noisy samples cannot be transferred from closed-set to open-set noise. It's unclear whether increasing the number of fragments would be beneficial in such cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: In addition to addressing the weaknesses mentioned above, I hope the authors can answer these questions:
1. Is there a systematic way to determine the optimal number of fragments?
2. The paper states, "If x is more likely to belong to fragment 2 than 5, then it should be more likely to belong to 1 than 4 and 3 than 6." Are there limitations to this statement? Would the proposed method still work if this relationship doesn't hold?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the limitations and broader impacts are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1. Focus on regression is limiting. ConFrag transforms regression tasks into classification tasks; it could potentially be extended to address noise in classification**
We believe that noisy regression is an important task on its own! However, most noisy label learning research focuses on classification tasks. Regression assumes a continuously ordered relationship between features and labels, a premise well-supported in existing research [5, 6, 1, 7], whereas classification is categorical rather than ordered. Our experiments demonstrate that many noisy classification approaches do not perform well in noisy regression tasks mainly due to the fundamental difference between regression and classification.
We employ cross-entropy loss (classification) as a surrogate objective to enhance robust feature extraction, which in turn improves the sample selection used to solve the regression task. This approach has been validated both theoretically and empirically, owing to better feature learning [8], as detailed in Appendix D.1. Table 2 compares regression-trained experts (ConFrag-R) with classification-trained experts (ConFrag), showing the superiority of using a surrogate classification objective.
The potential for extending our approach to classification exists, but it is outside the scope of this work.
**W2. Detected noise analysis, specifically, close-set noise detection and noise in the center or boundaries of the fragments.**
**Closed-set noise detection.** Fig. R1 shows the selection rate of closed-set noise on the IMDB-Clean-B dataset with 40% symmetric noise. As training progresses, these noisy samples become less likely to be selected, demonstrating ConFrag’s ability to detect closed-set noise.
**Boundary and center noise.** We compare the selection rate and the error reduction rate (ERR) between samples at the boundary and at the center of fragments. Table R1 shows that the average differences in selection rate and ERR between the two groups are 2.29% and 2.43%, respectively. These results confirm that ConFrag consistently performs robust sample selection regardless of a sample's position.
**[W3, W4] Additional ablations on the amount and strength of noise (with emphasis on 0.5/1/5% noise), evaluation on real-life noise, and whether increasing the number of fragments would be beneficial under extremely small noise.**
We perform thorough experiments on diverse amounts and strengths of noise, following previous research on noisy regression [1, 9, 10, 11]. These include ablations over symmetric noise levels of 20/40/60/80% and Gaussian noise with standard deviations of 30/50%. Notably, we are the only work that tests on both symmetric and Gaussian noise! The results presented in Section 4.3 show that ConFrag is effective across all of these noise ratios and strengths.
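The corruption protocols referenced here are not spelled out in this excerpt; below is a minimal sketch under conventions common in the noisy-regression literature, where symmetric noise replaces a label with a uniform draw from the label range and Gaussian noise adds zero-mean perturbations scaled by the label range. The function name and exact scaling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def inject_label_noise(y, mode="symmetric", ratio=0.4, sigma=0.3, rng=None):
    """Corrupt 1-D regression labels with synthetic noise.

    symmetric: with probability `ratio`, replace a label with a value drawn
    uniformly from the observed label range (assumed convention).
    gaussian:  add zero-mean Gaussian noise with std = sigma * label range
    to every label (assumed convention).
    Returns the noisy labels and a boolean mask of corrupted entries.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    y_noisy = y.copy()
    lo, hi = y.min(), y.max()
    if mode == "symmetric":
        mask = rng.random(y.shape[0]) < ratio
        y_noisy[mask] = rng.uniform(lo, hi, size=int(mask.sum()))
    elif mode == "gaussian":
        mask = np.ones(y.shape[0], dtype=bool)
        y_noisy += rng.normal(0.0, sigma * (hi - lo), size=y.shape[0])
    else:
        raise ValueError(f"unknown mode: {mode}")
    return y_noisy, mask
```

For example, `inject_label_noise(ages, "symmetric", ratio=0.4)` would corrupt roughly 40% of the labels while leaving the rest untouched.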
In Table R2, we analyze ConFrag using tiny noise amounts (0.5/1/5% symmetric noise) and strengths (Gaussian 2/5%) on IMDB-Clean-B.
For noise ratios of 0.5/1/5%, ConFrag achieves a very low ERR, indicating that it can detect noisy samples regardless of the noise ratio. However, when the noise strength is very small (Gaussian 2/5%), ConFrag is not able to filter them effectively. We acknowledge this as a limitation of our method.
Since ConFrag primarily focuses on sample selection, it can be easily reinforced by other techniques during the training process to mitigate the effects of tiny noise strengths. Indeed, in the Gaussian 2/5% experiments, integrating C-Mixup or Co-teaching with our ConFrag improves the performance by 4.74% and 4.51%, respectively. Regarding the reviewer’s question on whether increasing the number of fragments can be beneficial when the noise strength is small, please refer to our answer to Q1.
Last but most importantly, we evaluate the performance of ConFrag on real-world noise, which includes many types of noise with varying strengths. Table R4 reports the results on a version of the IMDB dataset with real-world noise [3], IMDB-org-B. The results show that the vanilla version of our method performs on par with the other best-performing baselines on real-world datasets. Importantly, since ConFrag only concerns the sample selection process, it can be easily integrated with other noisy label methods. Combined with such techniques, ConFrag outperforms all baselines by a non-trivial margin, demonstrating the practical effectiveness of our approach.
**Q1. A systematic way to determine the fragment number?**
Analysis in Appendix G.2 shows that fixing the fragment number $F$ to 4 yields the best performance for the IMDB-Clean-B and SHIFT15M-B datasets, and that performance is not sensitive to this hyperparameter. This setting was subsequently used in all our experiments. The analysis further shows that, given a large enough dataset, the sensitivity to the number of fragments decreases.
Finer fragmentation can improve detection of tiny strength noise but increases the risk of overfitting due to fewer samples per task. In larger datasets, this risk is mitigated because the data size per fragment remains large enough, allowing ConFrag to benefit from finer fragmentation. Based on these insights, we recommend starting with $F=4$ and incrementally increasing it until performance deteriorates, especially for larger datasets.
**Q2. Limitations to the statement, "If x is more likely to belong to fragment 2 than 5, then it should be more likely to belong to 1 than 4 and 3 than 6."? Can it hold without this relationship?**
The statement is highly related to the fundamental characteristic of regression: the continuous and ordered correlation between the label and feature space [5,6,1,7]. Since our approach is rooted in this characteristic, its effectiveness can be compromised if it does not hold. However, we reemphasize that this is a common characteristic found in most regression tasks, and prior works on regression with imperfect data build their methods on top of this characteristic [5,6,1].
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I have updated my rating accordingly. | Summary: The paper introduces an innovative approach for the collective modeling of regression data, grounded in the idea that similar labels often correspond to shared significant features. The authors convert the data into separate, yet juxtaposed fragment pairs, employing a combination of adjacent fragments to detect noisy labels. This detection is achieved via a mutual agreement within the prediction and representation spaces.
Strengths: - The method seems technically solid and correct
- To the best of my knowledge, this paper presents a novel method that has not been introduced in the past
- The paper presents a robust evaluation setup
Weaknesses: I think that the paper is overwhelmed with its presentation of too many details, which can move to the appendix to let the reader focus on the important parts. That being said, this is the author's decision, and I didn't consider it in my score.
Technical Quality: 2
Clarity: 2
Questions for Authors: - If I understand correctly, you can use the method for generative tasks as well. Did you consider it? if it is not possible, can you please elaborate?
- Did you study classification tasks? If yes, how did it perform?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W. The paper is overwhelmed with its presentation of too many details, which could be moved to the appendix to let the reader focus on the important parts.**
We sincerely thank the reviewer for the suggestion. To balance clarity and detail, we will move some of the overly detailed procedure of contrastive fragment pairing in Section 2.1 (lines 107-116) to the Appendix and make the motivation part (lines 117-144) more concise. To further improve clarity, we will include a figure illustrating the overall framework of ConFrag, highlighting the online nature of the filtering and training processes.
**Q1. Extension to generative tasks**
ConFrag learns to select samples $(x, y)$ that are better aligned and use them for training regression tasks (i.e., learning $P(y|x)$). As the reviewer suggested, the selected samples can also be used for conditional generation tasks (i.e., learning $P(x|y)$).
To verify this, we train a continuous conditional GAN model (SVDL+ILI) [4] for 40k steps using (1) clean IMDB-Clean-B dataset, (2) IMDB-Clean-B with 40% symmetric noise, and (3) samples selected by ConFrag on the noisy dataset. To measure the condition alignment and the quality of generated samples, we use MAE and intra-fragment FID on 52,000 generated images. Specifically, intra-fragment FID is the average of FID values measured for images in each fragment. We use $F=4$ as done in ConFrag. For both evaluation metrics, lower is better. The results show that sample selection by ConFrag improves both the condition alignment and the quality of conditionally generated images.
| | intra-fragment FID (F=4) | MAE |
|---|:---:|:---:|
| Clean (0% noise) | 14.42 | 10.419 |
| Symmetric 40% noise | 15.998 | 13.569 |
| ConFrag | 15.244 | 10.348 |
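Since intra-fragment FID is defined above as the average of per-fragment FID values, a sketch on pre-extracted features may help. The Inception feature-extraction step used in practice is omitted, the function names are ours, and for PSD covariances we compute tr(sqrtm(C1·C2)) via eigenvalues instead of a matrix square root; this is a hedged illustration, not the authors' code.

```python
import numpy as np

def frechet_distance(x, y):
    """Frechet distance between Gaussians fitted to two feature sets
    (rows = samples). For PSD covariances, the eigenvalues of C1 @ C2
    are non-negative, so tr(sqrtm(C1 C2)) equals the sum of their
    square roots."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    c1 = np.cov(x, rowvar=False)
    c2 = np.cov(y, rowvar=False)
    eig = np.linalg.eigvals(c1 @ c2)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2) - 2.0 * tr_sqrt)

def intra_fragment_fid(real_feats, gen_feats, frags_real, frags_gen, F):
    """Average of Frechet distances computed separately within each of
    the F fragments, matching the intra-fragment FID described above."""
    vals = [frechet_distance(real_feats[frags_real == f],
                             gen_feats[frags_gen == f]) for f in range(F)]
    return float(np.mean(vals))
```

With identical real and generated features the distance collapses to zero, which is a quick sanity check on any FID implementation.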
**Q2. Extension to classification tasks**
As suggested by the reviewer, it is possible to apply our contrastive fragmentation approach to classification tasks by making the following adjustments:
1. We set each class as a fragment (i.e., the number of fragments $F$ = the number of classes)
2. We measure the distance between fragments/classes. Possible metrics include the Euclidean distance between GloVe embeddings, the CLIP text cosine similarity of each class, or the CLIP image similarity computed using the samples of each class. The distance metric is used for constructing contrastive pairs and defining two neighboring fragments for each fragment.
3. We redefine the fragment prior (Equation 2) using the distance between classes.
We evaluated the extended ConFrag on CIFAR-10 classification with 20/40/60% symmetric noise ratios using selection ratio and precision as metrics, where precision is defined as P(clean|selected). We found that samples selected by the modified ConFrag are cleaner than the original dataset (i.e., precision higher than 1 - noise ratio), showing the potential of extending ConFrag to classification tasks.
| Noise | Selection ratio | Precision |
|---|:---:|:---:|
| 20% | 0.8191 | 0.8871 |
| 40% | 0.7353 | 0.7359 |
| 60% | 0.6702 | 0.5243 |
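The two metrics in the table above, as defined in the rebuttal (selection ratio = fraction of samples kept; precision = P(clean|selected)), could be computed as follows; the function name is ours.

```python
import numpy as np

def selection_metrics(is_clean, is_selected):
    """Selection ratio: fraction of all samples kept by the selector.
    Precision: P(clean | selected), i.e. the fraction of kept samples
    whose labels are actually clean."""
    is_clean = np.asarray(is_clean, dtype=bool)
    is_selected = np.asarray(is_selected, dtype=bool)
    selection_ratio = float(is_selected.mean())
    precision = float(is_clean[is_selected].mean()) if is_selected.any() else 0.0
    return selection_ratio, precision
```

Under this definition, precision above `1 - noise_ratio` means the selected subset is cleaner than the original dataset, which is exactly the criterion used in the table.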
However, since ConFrag is designed to leverage the characteristics of noisy regression tasks (as mentioned in the general response), noisy classification is outside our main focus, and its performance does not reach the level of state-of-the-art sample selection approaches designed for noisy classification tasks. Additionally, we found that the choice of distance metric has a significant effect on sample selection performance, indicating that designing a better distance metric is necessary for a successful extension of our approach to noisy classification tasks. | Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for their insightful comments and acknowledgment. We appreciate the recognition that our approach is technically solid and correct (Reviewer T5Bx), our presentation is commendable (Reviewer T5Bx, 2i51), and our extensive comparison of regression works is thorough (Reviewer 2i51). Additionally, we value the positive feedback on our robust evaluation setup and metrics (Reviewer T5Bx, RwYcT) and the inclusion of four additional benchmark datasets (Reviewer RwYcT). We are grateful to all four reviewers for recognizing the novelty, soundness, intuition, and conceptual strength of our approach.
As a global response, we would like to highlight the following.
## Characteristics of Noisy Regression
Noisy regression problems have two distinct characteristics that distinguish them from noisy classification tasks: a continuously ordered correlation between labels and features, and varying degrees of noise strength. We emphasize that these two characteristics are utilized in designing the key components of ConFrag.
**Ordered relations**
ConFrag leverages ordinal relations by training on contrastive fragmented pairs to learn robust features. These features are then aggregated and ordered. Clean sample selection is done through the mixture of neighboring fragments which ensures the integrity of the learned order relations.
**Focus on noisy label regression**
A method tailored for noisy regression problems must primarily focus on detecting noise with high severity, rather than treating all noise as equal. The components of our method such as maximally contrasting fragment pairing and fragment prior, which are based on the distance in continuous and ordered label space, are designed to prioritize filtering out such high severity noise.
## Additional Experiments
In this rebuttal, we present the results of ConFrag on two additional noisy label regression datasets suggested by Reviewer RwYcT. Please refer to W3.
**Additional Real-World Dataset**
We’d like to emphasize that ConFrag deals with the sample selection process, and thus it can be easily integrated with other noisy label methods. Our approach combined with regularization methods such as C-Mixup [1] and Co-Teaching [2] significantly outperforms all baselines on the IMDB-Org-B [3] dataset. Specifically, our method achieves improvements of 3.3% and 4.82% in Mean Relative Absolute Error (MRAE), respectively. This result demonstrates the practical effectiveness of our approach.
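MRAE is not defined in this excerpt; a common definition, which we assume here, normalizes the absolute error by the ground-truth magnitude and averages over samples:

```python
import numpy as np

def mrae(y_true, y_pred, eps=1e-8):
    """Mean Relative Absolute Error (assumed definition): absolute error
    scaled by the magnitude of the ground-truth label, averaged over
    samples. `eps` guards against division by zero labels."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) / (np.abs(y_true) + eps)))
```

Lower is better; an improvement of 3.3% in MRAE, as reported above, means the averaged relative error drops by that amount.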
We attach a pdf file containing figures and tables. For figures and tables whose index starts with ‘R’, please refer to the attached file.
**Global References**
[1] Yao et al., C-mixup: Improving generalization in regression. ICML 2022
[2] Han et al., Co-teaching: Robust training of deep neural networks with extremely noisy labels. NeurIPS 2018
[3] Lin et al., FP-Age: Leveraging Face Parsing Attention for Facial Age Estimation in the Wild. TIP 2022
[4] Din et al., Continuous Conditional Generative Adversarial Networks: Novel Empirical Losses and Label Input Mechanisms. TPAMI 2023
[5] Yang, et al. Delving into deep imbalance regression. ICML 2022
[6] Gong, et al. Ranksim: Ranking similarity regularization for deep imbalanced regression. ICML 2022
[7] Zha, et al. Supervised contrastive regression. 2022
[8] Zhang, et al. Improving deep regression with ordinal entropy. ICLR 2023
[9] Castells et al., SuperLoss: A Generic Loss for Robust Curriculum Learning. NeurIPS 2020.
[10] Garg and Manwani. Robust deep ordinal regression under label noise. ACML 2020
[11] Wu et al., discrimLoss: A Universal Loss for Hard Samples and Incorrect Samples Discrimination. TMM 2024.
[12] Grønlund et al., Margin-based generalization lower bounds for boosted classifiers. NeurIPS 2019
[13] Grønlund et al., Near-tight margin-based generalization bounds for support vector machines. ICML 2020
[14] Wei et al., Open-set label noise can improve robustness against inherent label noise. NeurIPS 2021
[15] Jacobs et al., Adaptive mixtures of local experts. Neural Computation 1991
[16] Dornaika et al., Robust regression with deep CNNs for facial age estimation: An empirical study. Expert Systems with Applications 2020
[17] Zha et al., Rank-N-contrast: Learning continuous Representations for Regression. NeurIPS 2023.
Pdf: /pdf/5d3f93d4ce6d82c5ceae65b511dc9a34b7379a59.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bridging Gaps: Federated Multi-View Clustering in Heterogeneous Hybrid Views | Accept (poster) | Summary: A variant of horizontal FedMVC is proposed to address more realistic scenarios involving heterogeneous hybrid views. It develops specific strategies and conducts theoretical analyses from the perspective of bridging client and view gaps. The proposed method demonstrates promising experimental results on several datasets.
Strengths: 1) The scenario of heterogeneous hybrid views assumed in the paper is interesting and merits further exploration.
2) The appendices are thorough and well-organized, including theoretical proofs and additional experiments.
Weaknesses: 1) Contrastive learning strategies have been widely used in multi-view clustering methods; thus, the synergistic contrast strategy proposed in this paper may not offer significant novelty.
2) The paper dedicates considerable effort to describing how common semantics H is extracted from the raw data of each client, but it lacks an explanation of why this approach is suitable for clustering tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) The experimental results reported in the paper show a large discrepancy for DSIMVC on the MNIST-USPS dataset compared to the original results, while the differences are much smaller for the BDGP and Multi-Fashion datasets. Why does this phenomenon occur?
2) In the reported experiments, the number of clients varies across different datasets. How were these numbers chosen?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer uC6c:
Thank you for your valuable feedback.
**Q1: Contrastive learning strategies have been widely used in multi-view clustering methods; thus, the synergistic contrast strategy proposed in this paper may not offer significant novelty.**
We would like to emphasize that the local-synergistic contrast strategy comprises both feature contrastive learning and model contrastive learning, aimed at addressing the client gap and mitigating the heterogeneity between single-view clients and multi-view clients.
We acknowledge that feature contrastive learning used in multi-view clients has already been applied in some multi-view clustering methods. However, our goal is that multi-view clients can help single-view clients bridge client gaps. Thus, the innovation of this module does not lie in multi-view clients using feature contrastive learning to train local models, but rather in single-view clients using model contrastive learning to bridge client gaps and discard view-private information detrimental to clustering. By establishing a unified goal of extracting common semantics H in both single-view and multi-view clients, a communication bridge is built. Meanwhile, the extraction of common semantics helps in discovering complementary clustering structures across clients. Furthermore, model contrastive learning allows the local single-view clients to converge towards the global model while amplifying the differences between the reconstruction and consistency objectives in the local models.
**Q2: The paper dedicates considerable effort to describing how common semantics H is extracted from the raw data of each client, but it lacks an explanation of why this approach is suitable for clustering tasks.**
Thank you for your feedback. As we mentioned in our response to Q1, the extraction of common semantics H aims to unify the training objectives of single-view clients and multi-view clients. This allows single-view clients to use model contrastive learning to bridge client gaps and discard view-private information that is detrimental to clustering. We believe that focusing on common semantics, which eliminates the adverse effects of view-private information, is more conducive to discovering subsequent clustering structures. We will revise our manuscript to make this point clearer.
**Q3: The experimental results reported in the paper show a large discrepancy for DSIMVC on the MNIST-USPS dataset compared to the original results, while the differences are much smaller for the BDGP and Multi-Fashion datasets. Why does this phenomenon occur?**
Thank you for your detailed observation. We have carefully reviewed the replication code for DSIMVC and confirmed that the experimental results reported in Table 1 are accurate. In DSIMVC, incomplete samples are generated by randomly removing views under the condition that at least one view remains in each sample. In contrast, our comparison strategy treats data from multi-view clients as complete and data from single-view clients as incomplete. Unlike the random removal in DSIMVC, our approach yields incomplete data in which the same views are consistently missing across a client's samples. This inconsistency in how incomplete scenarios are generated explains the discrepancy between the originally reported results and our replicated results.
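The two incomplete-view generation protocols being contrasted could be sketched as follows. This is our reading of the description above (per-sample random removal with at least one surviving view versus client-wise fixed missing views); the exact DSIMVC details and all function names are assumptions.

```python
import numpy as np

def random_view_mask(n_samples, n_views, missing_rate, rng=None):
    """DSIMVC-style masking (assumed): each view of each sample is dropped
    independently with probability `missing_rate`; samples left with zero
    views get one view restored at random, so every sample keeps at least
    one view. Returns a boolean (n_samples, n_views) availability mask."""
    rng = np.random.default_rng(rng)
    mask = rng.random((n_samples, n_views)) >= missing_rate
    empty = ~mask.any(axis=1)
    mask[empty, rng.integers(0, n_views, size=int(empty.sum()))] = True
    return mask

def clientwise_view_mask(n_samples, n_views, single_view_frac):
    """FMCSC comparison setting (our reading): a fraction of samples sit on
    single-view clients and keep exactly one fixed view per client; the
    remainder, on multi-view clients, keep all views. For brevity we use
    one single-view client per view, holding a contiguous block."""
    mask = np.ones((n_samples, n_views), dtype=bool)
    n_single = int(n_samples * single_view_frac)
    for client, block in enumerate(np.array_split(np.arange(n_single), n_views)):
        mask[block] = False
        mask[block, client % n_views] = True
    return mask
```

The key difference is visible in the masks: the random protocol scatters missing views across samples, while the client-wise protocol produces whole blocks of samples that all miss the same views.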
**Q4: In the reported experiments, the number of clients varies across different datasets. How were these numbers chosen?**
Figure 3 (b) and Figure 8 show that as the number of clients increases, the performance of FMCSC experiences a slight decline but remains generally stable. However, when the number of clients reaches a certain threshold, the clustering performance of FMCSC drops significantly. We believe this occurs because samples within each client become insufficient, which hinders local model training and negatively impacts clustering performance. Therefore, for different datasets, we aim to ensure that each client has more than 200 samples to maintain the stability of FMCSC during training. In the future, we will further explore high-quality training with fewer samples per client, encouraging more devices to participate in training and collaboration.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns, and I would like to maintain my previous rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer uC6c,
We sincerely appreciate your quick feedback. Your constructive comments have been instrumental in enhancing our work, and we are grateful for the attention and time you have dedicated to it.
If there are any further aspects you believe could benefit from refinement, please feel free to share your thoughts.
Best wishes,
Authors
---
Rebuttal 2:
Comment: Dear Reviewer uC6c,
We sincerely appreciate your time and effort in reviewing our work. We would be grateful for further feedback or confirmation that our rebuttal has adequately addressed your comments.
Thank you again for your time and consideration.
Best regards,
Authors | Summary: The authors introduce a novel method called Federated Multi-view Clustering via Synergistic Contrast (FMCSC), to simultaneously leverage the single-view and multi-view data across heterogeneous clients to discover clustering structures from hybrid views. This method bridges client and view gaps through a combination of theoretical and experimental analysis to discover the cluster structures in multi-view data distributed across different clients.
Strengths: Federated multi-view clustering (FedMVC) is a recently proposed and increasingly popular research direction within the multi-view learning community. This paper addresses a novel issue in FedMVC, termed ‘heterogeneous hybrid views,’ where a mixture of both single-view and multi-view clients exhibit varying degrees of heterogeneity.
Through theoretical and experimental analysis, the paper clearly shows how the proposed method bridges client and view gaps. The proposed method performs well across different federated settings, with reproducible code.
The paper is also well-written, with clear explanations, extensive experiments, and detailed theoretical proofs.
Weaknesses: The paper does not mention the data partitioning strategy of the proposed method. The authors need to provide details on how data are distributed among different clients.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am curious whether the proposed method can be applied to both vertical FedMVC and horizontal FedMVC scenarios?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations and societal impact have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer jVcj:
We thank the reviewer for valuable comments and suggestions that have greatly improved our paper.
**Q1: The paper does not mention the data partitioning strategy of the proposed method. The authors need to provide details on how data are distributed among different clients.**
Thank you for pointing out this issue. The proposed method adopts the common IID partition, where multi-view data are randomly and uniformly distributed across all clients, ensuring that each client's data distribution is similar to the overall data distribution. In implementation, we achieve this by setting a large Dirichlet distribution parameter, which makes the proportions of the different classes allocated to each client nearly equal, thereby approximating independent and identically distributed data.
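The Dirichlet-based partition described above is a standard device in federated learning; a minimal sketch (function name ours) shows how a large concentration parameter approximates the IID setting:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng=None):
    """Split sample indices across clients with a per-class Dirichlet
    prior. A large alpha (e.g. 1e4) makes every client's class
    proportions nearly identical, approximating IID; a small alpha
    yields heterogeneous (non-IID) clients."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportion of class-c samples assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

With `alpha=1e4`, every client receives close to `1/n_clients` of each class, matching the near-equal class proportions mentioned in the response.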
**Q2: I am curious whether the proposed method can be applied to both vertical FedMVC and horizontal FedMVC scenarios?**
Thank you for your insightful comments. We believe the proposed method can be applied to both vertical FedMVC and horizontal FedMVC scenarios. In the paper, we mention that existing FedMVC methods usually assume that clients are isomorphic and belong to either single-view clients or multi-view clients. In the heterogeneous hybrid views scenario applicable to FMCSC, vertical FedMVC scenarios can be viewed as having only single-view clients, with the number of clients equal to the number of views; horizontal FedMVC scenarios can be seen as having only multi-view clients. When facing horizontal FedMVC scenarios, FMCSC can be directly applied by setting the number of single-view clients to zero. Additionally, for vertical FedMVC scenarios, where only single-view clients exist, the method cannot bridge gaps with the help of multi-view clients. Therefore, the local-synergistic contrast module in Section 3.3 needs to be frozen before applying FMCSC.
The above analysis demonstrates that FMCSC, suitable for heterogeneous hybrid views scenarios, can still be applied to both vertical FedMVC and horizontal FedMVC scenarios. The heterogeneous hybrid views scenario complements and serves as an alternative to the current FedMVC assumption, making it better aligned with real-world situations.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks. The authors have addressed my concerns, and I will maintain my previous rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer jVcj,
We are truly thankful for your prompt reply. Your valuable feedback has significantly improved our manuscript, and we are grateful for your continued support in this process.
If you have any further suggestions for improvement, please let us know.
Best wishes,
Authors
---
Rebuttal 2:
Comment: Dear Reviewer jVcj,
We sincerely appreciate your time and effort in reviewing our work. We would be grateful for further feedback or confirmation that our rebuttal has adequately addressed your comments.
Thank you again for your time and consideration.
Best regards,
Authors | Summary: This paper proposes a novel method i.e. Federated Multi-View Clustering in Heterogeneous Hybrid Views (FedCSC), which introduces a locally collaborative contrastive learning algorithm to achieve consistency between single-view and multi-view clients, thereby mitigating heterogeneity among all clients. Furthermore, a global-specific aggregation algorithm has been used to address the gaps between different views. Benchmark experiments validate the effectiveness of this approach.
Strengths: 1: The paper introduces a novel global weighted aggregation method, encouraging the global model to learn complementary features from mixed views, demonstrating a certain level of innovation.
2: The paper introduces a novel federated multi-view learning framework that considers the scenario of a mix of single-view and multi-view clients, followed by experimental analysis.
3: The paper conducts a comprehensive theoretical analysis and validation of the proposed method.
4: The code is provided for reproduction
Weaknesses: 1. The dataset size of 10,000 is insufficient; validating the proposed methods requires a significantly larger dataset to ensure robustness and broader applicability.
2. The computational complexity of global-specific weighting aggregation is estimated to be high. Therefore, it may not be suitable for large-scale data tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How were $\alpha_m$ and $\alpha_p$ derived in Eq. 10 of this paper?
2. In the experimental results on the MNIST-USPS dataset in Figure 8, why does the performance with 24 clients outperform that of both 16 clients and 50 clients?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Given.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer s4wh:
**Q1: The dataset size of 10,000 is insufficient; validating the proposed methods requires a significantly larger dataset to ensure robustness and broader applicability.**
Thanks for the suggestion. We conduct further experiments on the large-scale YoutubeVideo dataset [1], which contains 101,499 samples across 31 classes, where each sample has three views of cuboids histogram, HOG, and vision misc. Below are the clustering results of FMCSC and several comparison methods when the number of multi-view clients and single-view clients are equal ($M/S = 1:1$):
| Method | IMVC-CBG (2022) | DSIMVC (2022) | ProImp (2023) | FedDMVC (2023) | FCUIF (2024) | FMCSC (Ours) |
| :----: | --------------- | ------------- | ------------- | -------------- | ------------ | :----------: |
| ACC | 18.32 | 15.01 | 22.45 | 21.52 | 23.04 | 26.42 |
| NMI | 11.83 | 8.11 | 17.48 | 16.96 | 18.46 | 20.74 |
| ARI | 2.04 | 1.20 | 3.43 | 3.42 | 3.72 | 5.82 |
The YoutubeVideo dataset is 10 times larger than the Multi-Fashion dataset with 10,000 samples. The results demonstrate that FMCSC adapts well to large-scale datasets and outperforms other methods, ensuring the proposed method's robustness and broader applicability.
[1] Omid Madani, Manfred Georg, and David A. Ross. On using nearly-independent feature families for high precision and confidence. Machine Learning, 92:457–477, 2013.
**Q2: The computational complexity of global-specific weighting aggregation is estimated to be high. Therefore, it may not be suitable for large-scale data tasks.**
Thank you for raising your concerns. We specifically analyze the computational complexity of global-specific weighting aggregation as follows.
Eq. (10) shows the detailed process of global-specific weighting aggregation performed by the server. During this process, the server receives $M$ local model parameter sets capable of handling multi-view data and $(VM + S)$ local model parameter sets capable of handling a single specific view. We define the number of parameters in these local models as $N_m$ and $N_p$, respectively. When aggregating to obtain $f_{g}(\cdot; \mathbf{w})$, the server performs a weighted sum of the $M$ multi-view client parameters, resulting in a computational complexity of $O(N_m M)$. For the $V$ models $f_{g}^v(\cdot; \mathbf{w}^{v})$, the server performs a weighted sum of the $(VM + S)$ single specific-view models, with a complexity of $O(V N_p M + N_p S)$. Therefore, the total computational complexity is $O(N_m M + V N_p M + N_p S)$.
Through this calculation, we observe that the computational complexity is related to the number of local model parameters and the number of participating clients. Table 5 presents the number of parameters per client for different datasets, such as 3.4M-10.1M for the Multi-Fashion dataset, corresponding to our definitions of $N_m$ and $N_p$, while the number of participating clients is typically below 100. The computational complexity derived from the above analysis is considered acceptable and confirms that our proposed method applies to large-scale data tasks, as addressed in the response to Q1. Additionally, Table 5 reports the running time of the proposed method on all datasets, e.g., 763.8s for the MNIST-USPS dataset, indicating low overall time complexity.
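For illustration, the weighted-sum aggregation analyzed above can be sketched as follows (a minimal sketch with flattened parameter vectors; the function and variable names are illustrative, not taken from the released codebase):

```python
import numpy as np

def weighted_aggregate(param_list, weights):
    """Weighted sum of client parameter vectors.

    For num_clients models of N parameters each, the cost is O(N * num_clients),
    matching the per-model terms in the complexity analysis above.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize aggregation weights
    return sum(wi * p for wi, p in zip(w, param_list))

# Toy example: M = 3 multi-view clients, each with N_m = 4 parameters.
params = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
global_params = weighted_aggregate(params, [1.0, 1.0, 2.0])  # every entry 2.25
```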
**Q3: How were $\alpha_m$ and $\alpha_p$ derived in Eq. 10 of this paper?**
Theorem 1 shows that optimization objectives in multi-view and single-view clients can be measured by different mutual information metrics, with proximity to these objectives reflecting model quality. Based on this, we use mutual information to evaluate the quality of models from multi-view and single-view clients. These metrics are calculated locally and sent to the server with the model parameters. The server assigns aggregation weights based on these values. Specifically, in Eq. (10), $\alpha_m$ are derived by normalizing $\sum_{v=1}^{V}I\left(\mathbf{H}, \mathbf{H}^{v}\right)$, which are calculated by $f_{m}\left(\cdot; \mathbf{w}_m\right)$ from different multi-view clients.
$\alpha_p$ are derived by normalizing $I\left(\mathbf{H}, \mathbf{H}^{g}\right) - I\left(\mathbf{H}, \mathbf{Z}^{v}\right)$, which are calculated by $f_{p}\left(\cdot; \mathbf{w}_p^{v}\right)$ from different single-view clients. Higher mutual information values indicate better model quality, leading to higher weights during aggregation and achieving high-quality global aggregation.
**Q4: In the experimental results on the MNIST-USPS dataset in Figure 8, why does the performance with 24 clients outperform that of both 16 clients and 50 clients?**
Thank you for your detailed observation. We believe that clustering performance is closely related to the number of clients, but it does not exhibit a simple linear relationship. For example, results on the BDGP dataset in Figure 8 show that the performance with 12 clients outperforms that of both 8 clients and 16 clients.
This phenomenon has two main reasons. First, a moderate increase in the number of clients promotes diversity in local models, benefiting the high-quality aggregation of the global models. Second, with a fixed number of samples, significantly increasing clients makes the data for each client sparse, negatively affecting local model training and clustering performance. The experimental results in Figure 8 highlight this issue: when the number of clients reaches a critical point, the severe insufficiency of samples within each client leads to a significant decline in FMCSC's clustering performance. This indicates that global clustering performance is highly correlated with the training quality of local models. This finding also motivates us to further explore high-quality training with few samples per client, encouraging more devices to participate in training and collaboration.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed the proposed weaknesses and questions; therefore, I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer s4wh,
Thank you for your prompt response and for reconsidering your evaluation. We genuinely appreciate your insightful feedback and the effort you have put into helping us refine our work. Your contributions have been invaluable in improving the quality of our paper.
If there is anything further you believe could be refined or enhanced, please let us know.
Best wishes,
Authors
---
Rebuttal 2:
Comment: Dear Reviewer s4wh,
We sincerely appreciate your time and effort in reviewing our work. We would be grateful for further feedback or confirmation that our rebuttal has adequately addressed your comments.
Thank you again for your time and consideration.
Best regards,
Authors | Summary: This paper proposes a novel Federated Multi-View Clustering method capable of handling heterogeneous hybrid views. By designing local-synergistic contrastive learning and global-specific weighting aggregation, the proposed method explores clustering structures across different clients. The effectiveness of the proposed method is demonstrated both theoretically and empirically.
Strengths: 1. The motivation behind the paper is clear, and the proposed heterogeneous hybrid view scenario is more applicable to real-world situations compared to other FedMVC methods.
2. The paper conducts extensive experiments, demonstrating the effectiveness of the proposed method.
Weaknesses: 1. Several key observations are presented in Section 3.2 on cross-client consensus pre-training; however, the purpose of these observations is not very clear.
2. The paper describes the transfer of multiple model parameters between the client and the server, but does not analyze the communication overhead.
3. There are some inaccuracies in the descriptions, such as "select 10 state-of-the-art methods" in Lines 239 and 578, which should be 9?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see ‘Weaknesses’.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Response to Reviewer 14Lq:
We sincerely appreciate your constructive comments and suggestions.
**Q1: Several key observations are presented in Section 3.2 on cross-client consensus pre-training; however, the purpose of these observations is not very clear.**
Thank you for your feedback. In Section 3.2, we have two key observations: (a) the presence of single-view clients exacerbates the issue of model drift; (b) the absence of uniformly labeled data across all clients leads to the reconstruction objective of autoencoders optimizing from multiple different directions, resulting in model misalignment.
For observation (a), we use the local-synergistic contrastive learning designed in Section 3.3, which helps single-view clients bridge client gaps and mitigate model drift. Theorem 2 demonstrates the effectiveness of this strategy by analyzing the generalization bounds of the proposed method. For observation (b), we propose cross-client consensus pre-training to align the local models on all clients and avoid their misalignment. Figure 2 visualizes the model outputs, further quantifying the impact of model misalignment and the effectiveness of our proposed strategy. Additionally, the ablation study results in Table 2 show that the proposed strategy plays a crucial role in our training process, facilitating consensus among clients during pre-training, effectively alleviating model misalignment, and accelerating convergence. We will revise our manuscript to make this point clearer.
**Q2: The paper describes the transfer of multiple model parameters between the client and the server, but does not analyze the communication overhead.**
Below, we use the MNIST-USPS dataset as an example to calculate the total communication overhead required by FMCSC. Table 5 reports the number of parameters per client by FMCSC. Suppose the data are distributed among 24 clients, with an equal number of multi-view and single-view clients. In this case, in each communication round, the total data volume that all clients need to transmit to the server is 485.6MB, and the data volume that the server needs to transmit to all clients is 54MB. Due to the proposed cross-client consensus pre-training strategy accelerating convergence, the communication rounds are set to 5. Therefore, the total communication overhead required to reach convergence for this dataset is calculated to be 2.6GB.
Similarly, we further calculate the total communication overheads required for FMCSC to reach convergence on other datasets. The results indicate that these overheads are acceptable, holding for both clients with powerful computing capabilities (such as large institutions) and lightweight clients (such as mobile devices).
| Dataset | MNIST-USPS | BDGP | Multi-Fashion | NUSWIDE |
| :--------------------: | :--------: | :---: | :-----------: | :-----: |
| Communication overhead | 2.6GB | 1.5GB | 6.4GB | 4.4GB |
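As a quick sanity check of the MNIST-USPS figure, using the per-round volumes stated above (485.6MB uplink, 54MB downlink, 5 rounds):

```python
# Per-round traffic volumes quoted above for MNIST-USPS with 24 clients.
uplink_mb, downlink_mb, rounds = 485.6, 54.0, 5

# Total traffic over all rounds, converted to GB (1024 MB per GB).
total_gb = (uplink_mb + downlink_mb) * rounds / 1024  # ~2.63 GB, i.e. the ~2.6GB reported
```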
**Q3: There are some inaccuracies in the descriptions, such as "select 10 state-of-the-art methods" in Lines 239 and 578, which should be 9?**
Thank you. We will correct the mistakes and further polish our manuscript.
---
Rebuttal 2:
Comment: Dear Reviewer 14Lq,
We sincerely appreciate your time and effort in reviewing our work. We would be grateful for further feedback or confirmation that our rebuttal has adequately addressed your comments.
Thank you again for your time and consideration.
Best regards,
Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Nonparametric Evaluation of Noisy ICA Solutions | Accept (poster) | Summary: The authors consider the noisy ICA problem. They first propose a numerical objective function that can be used as a guide to assess the quality of an existing ICA algorithm. This function is not suitable for optimization and therefore, the authors propose to use it in a meta-algorithm, where the purpose is to select the best ICA solutions out of several, where each solution is possibly produced by a separate algorithm. Then, they propose new contrast functions that are specifically suited for the noisy ICA problem. Finally, they study critical points of contrast functions so as to validate their use in noisy ICA problems. They provide numerical experiments to back up their claims.
Strengths: The paper consider the noisy ICA problem, which is a challenging problem and is arguably more realistic than the noise-free ICA problem. The paper also demonstrates via numerical experiments that the proposed meta algorithm and the contrast functions can uncover the components in an image mixing problem.
Weaknesses: The paper touches on too many things and is not very readable. For instance, I don't see the connection between the meta algorithm and the new contrast functions. Similarly, I would have welcomed more detail on the development of the meta algorithm, and the choice of randomization in its computation.
Occasionally, something is introduced out of nowhere and it's hard to understand the motivation. For instance, I don't see the development for the new contrast functions.
Overall, I'd have preferred a paper that coherently and clearly explores a single idea. Currently, it reads like a collection of not clearly-developed ideas.
Technical Quality: 2
Clarity: 1
Questions for Authors: - line 123, "We propose to adapt the CHFICA objective using estimable parameters to the noisy ICA setting" : I don't understand this sentence. What are the "estimable parameters" here? Is it $S$?
- eqn 2 : What's the motivation for the additional terms (the second factors in both "JOINT" and "PRODUCT")?
- eqn 2 : In the last term, is a $t$ missing?
- Thm 2 : How does knowing this bound help? Please discuss how this theorem is useful in the text.
- line 175, "...in this section them, ..." : typo?
- line 214 : What is a pseudo-Euclidean space?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the challenging nature and wider applicability of the noisy ICA problem and for your kind words regarding our experiments with the Meta algorithm. All typographical and grammatical errors will be corrected in the revised manuscript and are not addressed individually here.
**[Re: Connection between the meta-algorithm and the new contrast functions]** - Most methods for noisy ICA center around designing an appropriate contrast function. Lines 87-103 in the paper, under “Individual shortcomings of different contrast functions and fitting strategies,” explain the different issues with different types of existing contrast functions. As explained in lines 104-110, this leads to two challenges: a) how do we adaptively pick an appropriate contrast function for the dataset at hand, and b) can we design better contrast functions? Our paper gives a full pipeline for a new adaptive algorithm for choosing from a set of contrast functions, which includes established ones and the new ones we introduce.
**[Re: Development of the meta-algorithm and the choice of randomization in its computation]**
Lines 115-124 build intuition for the score used in the Meta algorithm using the noiseless case. For the noisy case, one needs to account for the characteristic function of the Gaussian noise, whose covariance matrix is unknown. The second terms in JOINT and PRODUCT essentially cancel out the additional terms using *estimable* parameters, i.e., the covariance matrix $S$ of the data (estimated via the sample covariance).
Lines 148-151 discuss why we choose not to use the independence score as an optimization objective but instead use it to choose the best algorithm among candidates. This leads to the meta-algorithm in Algorithm 1.
Remark 2 explains the choice of randomization in the Meta algorithm. Essentially, all related papers that use the characteristic functions in ICA or for Gaussianity testing (see the citations in Remark 2, line 155) average over $t$ sampled from a spherical Gaussian.
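As a hedged illustration of the randomization just described (a generic sketch of averaging a characteristic-function-based quantity over random directions, not the exact JOINT/PRODUCT terms of Eq. 2):

```python
import numpy as np

def ecf(X, t):
    """Empirical characteristic function at direction t: mean over rows of exp(i <t, x>)."""
    return np.mean(np.exp(1j * X @ t))

def averaged_score(X, score_at_t, n_dirs=100, seed=0):
    """Average a per-direction score over t drawn from a spherical Gaussian,
    as in the randomization of Remark 2."""
    rng = np.random.default_rng(seed)
    k = X.shape[1]
    return np.mean([score_at_t(X, rng.standard_normal(k)) for _ in range(n_dirs)])
```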
**[Re: Development of contrast functions]** Section 3.2 (lines 163-168) provides the mathematical properties useful in a contrast function for designing provable optimization algorithms for ICA. Lines 169-172 explain what each of these properties implies. Section 3.3 then develops the new contrast functions that satisfy these important properties.
**[Re: Estimable parameters]** Yes, S is the estimable parameter here. We will clarify it in the revised manuscript.
**[Re: Second-factors in JOINT and PRODUCT]** See the response on the development of the meta-algorithm above.
**[Re: Bound in Theorem 2]:** This bound shows that, uniformly over $F$, the empirical average $\mathbb{E}_{\mathbf{t}\sim \mathcal{N}(0,I_k)}\Delta(\mathbf{t},F|\hat{P})$ is close to the population score. This guarantees that, as long as the difference between the population scores of two candidate algorithms is not too small, the meta-algorithm can pick out the better one. We will clarify this in the manuscript.
**[Re: Pseudo-Euclidean Space]** A pseudo-Euclidean space is a generalization of Euclidean space, used by Voss et al., in which the product between vectors $u, v \in \mathbb{R}^{d}$ is given as $u^T A v$, where $A$ does not need to be positive definite. See also [a] for details.
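For concreteness, a small numerical illustration of such an indefinite product (the matrix and vectors below are arbitrary illustrative values):

```python
import numpy as np

# A symmetric but indefinite "metric" A: not positive definite.
A = np.diag([1.0, -1.0])
u = np.array([1.0, 2.0])
v = np.array([3.0, 1.0])

pe_inner = u @ A @ v   # pseudo-Euclidean product: 1*3 - 2*1 = 1
self_prod = u @ A @ u  # can be negative, unlike a Euclidean squared norm: 1 - 4 = -3
```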
References:
[a] Wikipedia contributors. "Pseudo-Euclidean Space." Wikipedia, The Free Encyclopedia. Accessed August 6, 2024.
---
Rebuttal Comment 1.1:
Title: Post author rebuttal comments
Comment: I thank the authors for their responses. They do help clarify the specific points I raised. I'd be willing to slightly increase my score. However, my main criticism, that the paper contains a number of disparate ideas which may as well be split, still stands -- to be fair, it may not be easy to address that with a minor revision.
---
Reply to Comment 1.1.1:
Title: Clarification on how the different ideas are tied together
Comment: Dear Reviewer uqTT,
We are grateful for your response and for giving us another opportunity to explain why we believe that the different ideas or components in this paper really belong together. To show the efficacy of our meta-algorithm, we need to have a candidate pool of contrast functions whose weaknesses complement each other. For example, kurtosis is a widely used cumulant-based method for noisy ICA that has been previously shown to suffer in the presence of heavy-tailed distributions or distributions with small kurtosis. So, we also need to have contrast functions that are not cumulant-based and are less sensitive to heavy-tailed source signals. While there are many such contrast functions for noiseless ICA, there are not many with provable guarantees for noisy ICA. This is why we designed our CHF-based contrast function and developed a theoretical framework for analysis under the noisy ICA model. We explained this between lines 87-110. We will be happy to add a longer discussion about this at the beginning of the paper to better address your remarks. | Summary: This paper proposes a nonparametric score to adaptively pick the best noisy ICA algorithm from a set of candidates. This “independence score” is based on the characteristic function-based objective (CHF) introduced by Eriksson&Koivunen in 2003.
In practice, this independence score evaluates the inverse mixing matrix obtained by an ICA algorithm without requiring access to the true sources.
In addition, the paper proposes some new contrast functions and algorithms and present simulation results showing the effectiveness of the proposed independence score.
Strengths: • Solid theoretical approach that justifies the proposed independence score and the new derived objective functions
• Good review of relevant previous studies on ICA and noisy ICA.
• Experimental results include synthetically generated sources as well as real data (images, MNIST) synthetically mixed.
Weaknesses: • The proposed approach focuses only on the recovery of the proper unmixing matrix (B^{-1}), while in noisy ICA the final goal is to recover the clean sources. There is no discussion about how to recover the sources after the proper unmixing matrix is obtained.
• The experimental part focuses on synthetically mixed signals only. The paper would greatly benefit from the inclusion of some real-world source separation problem.
• Theorem 2 assumes sub-Gaussian mixtures, which ensures the concentration of the sample covariance matrix. This assumption is rather strong, and it would be good to analyze what happens if it is not met.
Technical Quality: 3
Clarity: 3
Questions for Authors: Regarding the assumption of Theorem 2 (sub-Gaussianity): I understand that the assumption is needed to guarantee the concentration of the sample covariance in operator norm, which is rather strong. Do you think it is possible to relax that assumption and keep the method working? Have you made any experiments with super-Gaussian sources, for example?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the experimental results section discuss some limitations of the proposed algorithm in highly noisy scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind words regarding our theoretical approach, literature review, and experiments. We address your comments, suggestions, and questions below:
**[Re: Recovery of source signals]** We address this concern in lines 60-65 of our paper. Under the noisy ICA model (Eq 1), both $\mathbf{x} = B(\mathbf{z}+\mathbf{g’}) + \mathbf{g}$ and $\mathbf{x} = B\mathbf{z} + (B\mathbf{g’} + \mathbf{g})$ for Gaussian $\mathbf{g}’$ are indistinguishable (see Voss et al (2015) pg 3). This makes recovery of the source signals impossible in the noisy ICA setting, even if the mixing matrix is known exactly because part of the noise could always be interpreted as part of the signal while preserving the model definition. Voss et al. (2015) propose learning the source signals by maximizing the Signal-to-Interference-plus-Noise ratio (SINR). Typically, in all related papers, algorithms are compared using the accuracy of estimating the $B$ matrix. But, with the estimated $B$ matrix one can obtain an SINR optimal estimation of the signals as in the previous citation, which we will clarify.
**[Re: Experiments with real-world source separation data]** Like many previous methodological papers, we focused on synthetically mixed real data. Having access to the ground truth signals and mixing matrix allows us to better understand the quality of the solutions of the algorithms. For example, many of the available sound-based (see e.g. [a]) datasets mix different sound sources synthetically.
**[Re: Subgaussianity assumption in Theorem 2]:** Yes, as long as the sample covariance matrix concentrates around the population covariance in the operator norm, our proof holds. For example, [b] shows that if $\mathbb{E}[|X^T u|^q]\leq L^q$ is bounded for all unit vectors $u \in \mathbb{R}^{d}$ (Eq 1.6), then the covariance matrix concentrates at a rate $(\frac{d}{n})^{\frac{1}{2}-\frac{1}{q}}$. Under such distributions, which are more general than sub-gaussians, Theorem 2 will hold with a different error rate. We will clarify this. Our experiments include exponential source signals, which do not follow the subgaussian assumption. We have added new experimental results (see pdf) with other super-gaussian sources (Laplace, Student’s t with 3, 5 degrees of freedom, and exponential) to answer your question. The Meta algorithm closely follows the best candidate algorithm even in the presence of many super-gaussian signals, which reflects that the sub-gaussianity assumption in Theorem 2 is not critical.
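As a toy numerical check (not the paper's setting; the dimensions are chosen arbitrarily), the operator-norm concentration of the sample covariance that Theorem 2 relies on can be observed directly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20000
X = rng.standard_normal((n, d))             # sub-gaussian data with true covariance I_d
S_hat = X.T @ X / n                         # sample covariance matrix
err = np.linalg.norm(S_hat - np.eye(d), 2)  # operator-norm deviation, roughly sqrt(d/n)
```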
References:
[a] Massachusetts Institute of Technology. "Independent Component Analysis (ICA) Benchmark." MIT Media Lab. Accessed August 6, 2024.
[b] Vershynin, Roman. "How close is the sample covariance matrix to the actual covariance matrix?." Journal of Theoretical Probability 25, no. 3 (2012): 655-686.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: I appreciate the authors' responses to my comments. However, despite the weaknesses not being serious, they still persist, so I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer c27m,
Thank you - we are grateful for your review and response.
We just wanted to clarify that the first weakness (re: recovery of source signals) is not exactly a weakness, because it is a property of the noisy ICA model. We addressed this in lines 60-65 of our paper, and we are happy to add a more detailed discussion. Similarly, the third point (re: sub-gaussianity assumption in Theorem 2) can be replaced with a much weaker condition on finite moments (as pointed out in our rebuttal). | Summary: The presented paper focuses on the problem of noisy ICA, which remains a significant challenge in classical machine learning.
The authors introduce a nonparametric independence score, extending the work in [21], to evaluate the estimation of the demixing matrices without requiring any prior knowledge of the underlying noise distribution parameters.
Further, the authors propose some new contrast functions and provide a very interesting discussion of convergence for the presented contrast functions in noisy ICA, which can also be applied to cumulant-based contrast functions.
Strengths: The introduction of a nonparametric independence score is innovative.
The paper provides an extensive theoretical analysis, including the development of new contrast functions and a detailed discussion on convergence properties.
Weaknesses: Two main weaknesses stood out to me in the paper:
1. The proposed study relies heavily on Gaussian noise assumptions. The performance of the suggested score might not hold in cases with different noise characteristics. Similarly, the proposed contrast functions could have a limited scope of applicability in real-life scenarios.
2. The meta algorithm's effectiveness is contingent on the pool of candidate algorithms it selects from.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Equation 2, in the exponent of the product term, $\mathbf{t}$ seems to be missing?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The methods focus primarily on Gaussian noise, which might limit their applicability in scenarios where noise distributions do not conform to this assumption.
The global convergence analysis heavily depends on the assumption that the noise is Gaussian. This dependency raises questions about the generalizability of the convergence results to other types of noise.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind words regarding our independence score, contrast functions, and our theoretical analysis. We address your questions and suggestions below.
**[Re: Gaussian noise assumption]** The classical noisy ICA problem adds Gaussian noise to a mixture of non-gaussian independent components. This is an illustrious problem with applications ranging from signal processing and image analysis to biomedical data analysis. Nearly all the literature we found focuses on Gaussian noise. While it will be interesting to analyze the role of non-gaussian noise, that is outside the scope of this paper.
**[Re: effectiveness of Meta]** You are right. We hope that given the multitude of ICA methods we can include in the candidate pool, one can use this property as a strength of our algorithm.
**[Re: $\mathbf{t}$ missing]**: Yes, that is a typographical error. We will fix it in the revised manuscript. | Summary: This work proposes a modification of the original CHFICA characteristic function to the noisy ICA case without requiring knowledge of the noise distribution parameters.
The modified independence score is then used to select the best out of multiple ICA methods based on the score assigned to their solutions, on a per-dataset basis. In practice, an average score is evaluated over random directions $t$, which renders the score computationally inefficient for direct minimization.
The empirical estimate of the new independence score is shown to converge to the population score if the data covariance is concentrated.
Empirical evidence is shown that the new empirical score correlates well with the well-established Amari error.
This work also introduces two computationally efficient contrast functions to be maximized (CHF-based and CGF-based) for BSS, both of which do not require higher moments and thus work well on data with near-zero kurtosis, like Bernoulli(p) with low p.
Theoretical analyses of local and global convergence are also included for a large class of smooth contrast functions that meet certain desirable properties (Assumption 1). Properties (a) and (c) are critical as they yield contrast functions that are not influenced by additive independent Gaussian noise.
The CHF contrast only requires a finite second moment and is suitable for heavy-tailed sources.
The CGF contrast is not appropriate for heavy-tailed sources.
Global convergence: The Hessian of both contrasts is shown to have the form C = BDB^T. The maxima of the Hessian-adjusted contrasts are shown to occur at B^-1 u = e_i for positive elements of D. Fast sequential power-method optimization in a pseudo-Euclidean space defined by C^-1 was shown by [48]. Before optimization, C need only be computed once for some random vector u and then reused during the power-iteration updates. A requirement of no sign change of the third derivative of the contrast over the half-lines is needed to sufficiently distinguish between Gaussian and near-zero-kurtosis source distributions.
Local convergence: Linear convergence is attainable without the third derivative requirement. Geometric convergence is attained at some low epsilon based on how non-gaussian the source is.
Contrary to prior work on cumulant-based methods, convergence of contrasts meeting Assumption 1 was established based on the characterization of the contrasts in the pseudo-Euclidean space and not on the convergence of the power method.
Experimental results support Meta's ability to select the optimal solution over various algorithms, showing variance reduction in the estimates, uniform performance across high and near-0 kurtosis.
Evaluation of the CHF and CGF contrasts shows that CHF outperforms other methods at higher noise, but only at sample sizes n > 20000.
The Meta algorithm picks up the slack of CHF by selecting the competing PFICA solution when n < 20000.
Strengths: Originality:
- The work introduces a somewhat novel combination of known ideas in a reasonably novel formulation that adapts previous work to the case of additive Gaussian noise.
- The work differs from and extends previous contributions, dealing with near-zero kurtosis sources and additive Gaussian noise.
- The manuscript cites related work on contrast functions for BSS and noiseless ICA methods, adequately indicating the sources of inspiration.
Quality:
- The work is technically sound, with thorough proofs.
- The theoretical claims are well supported by the experiments.
- Strengths and weaknesses are discussed well.
Clarity:
- The work is written well overall, focusing on the key points and contributions.
Significance:
- The results are important as they establish a consistent framework to select the optimal solution over a range of BSS methods for noisy ICA based on the proposed modified independence score.
Weaknesses: Quality:
- The code is not friendly to readers as there is no 1-to-1 correspondence with the notation in the paper. Suggestion: improve documentation and tidy up the codebase (especially comments and function arguments) for readability.
Clarity:
- Despite the well-written manuscript and thorough methodology and proofs, the paper contains many typos (both in text and proofs) and inconsistencies in notation (especially between Appendix and main manuscript) that limit the clarity. Besides fixing these typos, explicitly include some of the "obvious" steps currently omitted in the derivation; that would greatly improve readability. Also add some high-level intuitions as noted below.
- Clarify the origin of new contrast functions and how they came about. It is unclear what motivated or led to these otherwise "semi-arbitrary" functions.
- Define O_P(.)
- It appears that the new independence score does not meet some of the properties in Assumption 1. Please, clarify if that is indeed the case and, if so, which properties specifically.
- Appendix A.3: Clarify that Algo 2 is NOT specific to contrasts based on power iteration only, but any sequential method. It is suggested to make one separate sub-section covering how the projection is done and another for the Algo description itself. Also clarify what the "Total" (non-sequential) Algo would look like, for comparison.
- Lines 119-122: t.T should be bold face.
- Eq. 2: missing last t in the PRODUCT term.
- Section 3.2: Change "Properties of Contrast Functions 1." to "Assumption 1. (Properties of Contrast Functions)"
- Line 175: remove "them"
- Line 182: "note" --> "not"
- Footnote 2 (page 5): add "and remains a linear combination of z"
- Theorem 4, line 254: Assumption 1(d) is NOT about the third derivative.
- Line 271: Define the value of k in this experiment. I see 11 mentioned in passing in line 786...
- Line 277: Fix contradicting statements: "CHF and CGF are initialized using the B estimated via PFICA" and "CHF and CGF is based on a single random initialization".
- Line 283: "also used" --> "also be used"
- Table 1 caption: Clarify that since median is being reported, Meta is better than the best method “on average”, but on any individual experiment it is identical to the best method.
- Line 299: refs to 2b and 2c are in flipped order.
- Line 315: In the noise power experiment, it is stated that "the difference between the two leading algorithms is small." Therefore, the same acknowledgement should be indicated in line 315 after "CHF dominates, and one can see that Meta starts following CHF"
- Line 456: Assumption 1(d) is NOT about the 3rd derivative constraint...
- Line 464: Missing ' in g.
- Line 472: For consistent notation, it should be Delta(t,F | P) = 0, not Delta_F(t) = 0
- Lines 474-477: there is an extra transpose symbol at the last t (several times).
- Line 510: (e) --> (d)
- Notation and conditions in lines 524-529 is not consistent with Theorem 3 in the main manuscript.
- Line 777: w_i is not defined, g_i should be ~g_i.
- Lines 784-789: the r defined here has no relation to the r in Algo 2. Please pick another letter.
- Algo 2:
- Clarify why it is necessary to evaluate Y_j per component and why it would not be equivalent to just compute W = X V^T U^T r
- for j in range[1, M] : M is not defined, but I think it would be l + 1? Also, the letter j is used for many different things inside the same loop. Change to " for p in range[1, l + 1] ", and s(j) --> s(p).
- " W(:,j) ← Y_j r " --> " W(:,a) ← Y_a r "
- I suspect r is equivalent to t in Eq. 2, if so change it to t.
- Appendix A.4: Verify that all occurrences of Assumption 1 (d) are correct (i.e., referring to symmetry not the third derivative)
Technical Quality: 4
Clarity: 2
Questions for Authors: Questions were included along with the weaknesses.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: Yes, the authors addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind words regarding our work's originality, quality, clarity, and significance. We appreciate your detailed comments and suggestions, which we address below. All typographical and grammatical errors will be corrected in the revised manuscript and are not addressed individually here.
**[Re: Code]** We will reorganize the codebase to enhance readability for the revised submission.
**[Re: Clarity]** In the revised version of the manuscript, we will fix typographical and notation errors and add further details in the derivations to enhance readability and clarity.
**[Re: $O_P$]** $O_P$ refers to Big-Oh notation in probability. We will include a formal definition in the revised manuscript.
**[Re: Occurrences of Assumption 1 (d)]** We will double-check and ensure that occurrences of Assumption 1(d) refer to the symmetry of source distributions, not the third-derivative condition.
**[Re: Properties of Independence Score]** The new independence score does not satisfy property (a) of Assumption 1, so we do not consider it as an optimization objective but rather treat it as a diagnostic score for the Meta algorithm.
**[Re: Appendix A.3]** We will reorganize the section to improve the clarity of Algorithm 2 and add further details on what the total (non-sequential) algorithm, which uses this procedure to choose the best candidate, would look like. Thank you for pointing out the wider applicability of the algorithm beyond power-method-based iterative approaches.
**[Re: Y_j per component]** Thank you for pointing it out. It is indeed true that the evaluation of Y_j can be done as suggested.
**[Re: Choice of k for experiments]** The value of k depends on the particular experiment and is specified in lines 289 and 297.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I have some follow up questions:
1. How is O_P different from just O?
2. Include a note that "The new independence score does not satisfy property (a) of Assumption 1"
3. Clarify which one is the 11-dimensional dataset mentioned in line 786... (line 297 says k=9).
4. Line 277: So which is the correct statement: "CHF and CGF are initialized using the B estimated via PFICA" or "CHF and CGF is based on a single random initialization"?
5. Origin of new contrast functions: How do you arrive at Eq. (4) from the CHF? Likewise, how do you arrive at Eq. (4) from the CGF? Although the conditions in Assumption 1 are satisfied by the new contrast functions, the conditions do not specify what f() should be. Currently the train of thought/rationale/derivation from CHF -> f() is missing (likewise from CGF -> f() ). Assumption 1 is insufficient to clarify this point.
---
Reply to Comment 1.1.1:
Comment: Thank you for your questions and for giving us an opportunity to further clarify. We address them below:
**Regarding $O_P$ and $O$**: $X_n=O_P(a_n)$ means that $X_n/a_n$ is stochastically bounded (or, more technically, uniformly tight), i.e. $\forall \epsilon>0$, there exists a finite $M$ such that $\sup_n P(|X_n/a_n|>M)\leq \epsilon$ (see [A]). We will clarify this.
**Regarding Independence Score and Property (a)**: We will do so.
**Regarding dimension of dataset**: It should be the 9-dimensional dataset. We will fix this typographical error.
**Regarding random initialization of CGF and CHF**: Both are correct. Recall that, for the power-iteration-based algorithm of Voss et al. (2015), we need both a quasi-orthogonalization matrix, which is an estimator of a matrix of the form $BDB^T$, and a unit vector $u$. The quasi-orthogonalization matrices for CHF and CGF use the $B$ matrix estimated via PFICA, but the initial vector $u$ for the power iteration is a random unit vector. We will clarify this further in the revised manuscript.
**Regarding the origin of new contrast functions**: We started from the fact that, like the cumulants, the cumulant generating function (and similarly the CHF-based counterpart) satisfies Assumption 1a). In order to satisfy 1c), we wish to subtract out the part resulting from the Gaussian noise, which led to the additional terms involving $u^{\top}Su$. We will add this explanation to better address your remark.
References :
[A] Vaart, A. W. van der. 1998. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press. | Rebuttal 1:
Rebuttal: We want to first thank all the reviewers for their valuable suggestions and insightful feedback. We believe we have addressed nearly all of their main technical questions. In what follows, we will address some important points each reviewer has raised. We will correct all the typographical issues pointed out and will not address them here.
### **Re: Clarity (Reviewer Dz1S)**
Thank you for noting that our manuscript, proofs, and methodology are well-written and thorough. In the revised version of the manuscript, we will fix typographical and notational errors and add further details in the derivations to enhance readability and clarity.
### **Re: Gaussian noise assumption (Reviewer N8km)**
The classical noisy ICA problem adds Gaussian noise to a mixture of non-Gaussian independent components. This well-studied problem has applications ranging from signal processing and image analysis to biomedical data analysis. Nearly all the literature we found focuses on Gaussian noise. While it would be interesting to analyze the role of non-Gaussian noise, that is outside the scope of this paper.
### **Re: Recovery of source signals (Reviewer c27m)**
We address this concern in lines 60-65 of our paper. Under the noisy ICA model (Eq 1), both $\mathbf{x} = B(\mathbf{z}+\mathbf{g}') + \mathbf{g}$ and $\mathbf{x} = B\mathbf{z} + (B\mathbf{g}' + \mathbf{g})$ for Gaussian $\mathbf{g}'$ are indistinguishable (see Voss et al. (2015), pg 3). This makes recovery of the source signals impossible in the noisy ICA setting, even if the mixing matrix is known exactly, because part of the noise could always be interpreted as part of the signal while preserving the model definition. Voss et al. (2015) propose learning the source signals by maximizing the Signal-to-Interference-plus-Noise Ratio (SINR). Typically, in all related papers, algorithms are compared using the accuracy of estimating the $B$ matrix. But with the estimated $B$ matrix one can obtain an SINR-optimal estimation of the signals as in the previous citation, which we will clarify.
### **Re: Subgaussianity assumption in Theorem 2 (Reviewer c27m)**
Yes, as long as the sample covariance matrix concentrates around the population covariance in operator norm, our proof holds. For example, [a] shows that if $\mathbb{E}[|X^T u|^q]\leq L^q$ is bounded for all unit vectors $u \in \mathbb{R}^{d}$ (Eq 1.6), then the covariance matrix concentrates at a rate $(\frac{d}{n})^{\frac{1}{2}-\frac{1}{q}}$. Under such distributions, which are more general than sub-gaussians, Theorem 2 will hold with a different error rate. We will clarify this. Our experiments include exponential source signals, which do not follow the subgaussian assumption. To answer your question, we have now added new experimental results (see pdf) with other super-gaussian sources (Laplace, Student’s t with 3, 5 degrees of freedom, and exponential). The Meta algorithm closely follows the best candidate algorithm even in the presence of many super-gaussian signals, which reflects that the sub-gaussianity assumption in Theorem 2 is not critical.
### **Re: Connection between the meta-algorithm and the new contrast functions (Reviewer uqTT)**
Most methods for noisy ICA center around designing an appropriate contrast function. Lines 87-103 in the paper, under “Individual shortcomings of different contrast functions and fitting strategies,” explain the different issues with different types of existing contrast functions. As explained in lines 104-110, this leads to two challenges: a) how do we adaptively pick an appropriate contrast function for the dataset at hand, and b) can we design better contrast functions? Our paper gives a full pipeline for a new adaptive algorithm for choosing from a set of contrast functions, which includes established ones and the new ones we introduce.
### **Re: Development of the meta-algorithm and the choice of randomization in its computation (Reviewer uqTT)**
Lines 115-124 build intuition for the score used in the Meta algorithm using the noiseless case. For the noisy case, one needs to account for the characteristic function of the Gaussian noise, whose covariance matrix is unknown. The second terms in JOINT and PRODUCT essentially cancel out the additional terms using *estimable* parameters, i.e. the covariance matrix $S$ of the data (estimated using the sample covariance).
Lines 148-151 discuss why we do not use the independence score as an optimization objective and instead use it to choose the best algorithm amongst candidates. This leads to the meta-algorithm in Algorithm 1.
Remark 2 explains the choice of randomization in the Meta algorithm. Essentially, all related papers that use the characteristic functions in ICA or for Gaussianity testing (see the citations in Remark 2, line 155) average over $t$ sampled from a spherical Gaussian.
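The averaging over $t$ described above can be illustrated with a minimal Monte Carlo sketch; the score function and all names here are hypothetical stand-ins, not the paper's actual independence score:

```python
import numpy as np

def average_score_over_gaussian_t(score_fn, dim, n_samples=2000, seed=0):
    # Draw t ~ N(0, I_d) and average a per-direction score over the draws,
    # mirroring the spherical-Gaussian randomization used in the literature.
    rng = np.random.default_rng(seed)
    ts = rng.standard_normal((n_samples, dim))
    return float(np.mean([score_fn(t) for t in ts]))
```

For example, averaging the squared norm of $t$ over draws from a 3-dimensional spherical Gaussian concentrates near 3, since each coordinate contributes unit variance.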
### **Re: Origin and development of contrast functions (Reviewer Dz1S)**
Section 3.2 (lines 163-168) provides the mathematical properties that are useful in a contrast function for designing provable optimization algorithms for ICA. Lines 169-172 explain what each of these properties imply. Section 3.3 then develops the new contrast functions that satisfy these important properties.
References:
[a] Vershynin, Roman. "How close is the sample covariance matrix to the actual covariance matrix?." Journal of Theoretical Probability 25, no. 3 (2012): 655-686.
Pdf: /pdf/438ba92d38ce5c5e3f0a7c15a55d59e17d1328ce.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training | Accept (poster) | Summary: This work proposes a novel data selection method, FRHOzen (Frozen Reducible Hold Out Loss), which leverages an empirical Bayes-inspired approach to derive a simple and computationally efficient selection criterion based on the relative loss values of two auxiliary models. The authors provide an empirical evaluation of FRHOzen on two language modeling tasks: (1) selecting data from C4 for domain adaptation evaluated on Books and (2) selecting data from C4 for a suite of downstream multiple-choice question answering tasks.
Strengths: 1.They analyze a family of loss-based approaches for targeted selection of pre-training data, propose a simple approach that outperforms existing methods, and provide some preliminary evidence of favorable scaling properties.
2. They analyzed and compared the computational cost of FRHOzen with that of similar methods and demonstrated its computational efficiency.
Weaknesses: 1. I think this work does not match its objective of selecting optimal subsets of data for language model pre-training. The evaluation is done by fine-tuning an already-trained LM using downstream data, which cannot prove that the selection works for LM pre-training. In fact, a pre-trained LM usually should not be specifically optimized for a particular downstream task (with regard to its domain-specific data).
2. In addition, the evaluation is based on OLMo, which is a decoder-only LM. Existing pipelines don't fine-tune decoder-only LMs directly; usually SFT is needed for prompt alignment. I suspect the evaluation results may not be applicable to real scenarios.
3. The assumption of a pre-set budget n (in Algorithm 1) is less valid, and the optimal value of n is hard to predict during pre-training.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide more details on how you fine-tune OLMo using downstream data? What's the loss function?
2. What is the unit of a data point x in Equation 6?
3. Is the selected optimal data used to optimize the fine-tuning phase or the pre-training phase of training?
4. What's the difference between data points and sequences (Section 4.2, Line 215)?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As the authors have stated comprehensive in the limitation Section, I have no further comments for this part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to thank the reviewer for their positive comments about how our simple method outperforms alternatives, has evidence of favorable scaling, and computational efficiency. We think these are all important strengths of the paper, and the other reviewers largely agreed.
Now we will address each of the weaknesses and questions raised in the review.
## Weaknesses
1. We think there is a *major* misunderstanding here of our methodology. We are not evaluating a finetuned model, and we are not using any pretrained models from prior work. FRHOzen is a method for filtering a pre-training corpus to select high quality data for training from scratch. We do use a small amount of downstream data as a *guide* to help us define “high quality data” for our selection procedure. In particular, if we look at algorithm 1, we first pretrain and then finetune some small *auxiliary* models in lines 1 and 2. But, these models are then used in lines 3 and 4 to create a new dataset S of high quality data from the pre-training corpus and we then pre-train models from scratch. Thus, all results in the paper are the results for the models that are pre-trained on S in various settings. For example, Figure 1a shows how this yields a pre-training corpus that is much better on a *suite* of 8 downstream tasks for a 1.2B model.
2. Again we think there is a major misunderstanding here. We use the olmo *code base*, but we do not use the olmo pre-trained model. We train models from scratch. We should note that indeed the models that we pre-train from scratch could be subsequently finetuned with SFT, but that is beyond the scope of the paper for now.
3. We do not agree here. N can be seen as the compute budget we have for pre-training (it is directly proportional to FLOPs when using e.g. chinchilla scaling to set model size). A user who is attempting to train a model can set this according to their particular constraints for their budget.
### Questions
1. Again we want to reiterate that we are not proposing a finetuning method. However, we understand there may be some confusion since we do finetune the small auxiliary conditional model. For that finetuning, we simply use the standard next-token prediction loss.
2. We are not sure we understand the question. In equation 6, x_i are sequences sampled from the unfiltered pre-training corpus and we evaluate their log likelihood under various language models (i.e. the sum of the per-token losses across the sequence).
3. The selected data (S in algorithm 1) is used for pre-training, i.e. training models from scratch. All reported results are for these models that are trained from scratch.
4. Indeed, datapoints and sequences are the same thing. Since we consider language modeling, each datapoint is a sequence of tokens. For example, in our case this will be a sequence of 512 tokens of tokenized data from a chunk of a paragraph from C4.
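To make the selection rule described above concrete, here is a hedged sketch of the top-n criterion: score each candidate sequence by the loss reduction of the finetuned (conditional) auxiliary model relative to the base (marginal) one, and keep the highest-scoring sequences. Array and function names are illustrative, not from the paper's codebase.

```python
import numpy as np

def frhozen_select(marginal_loss, conditional_loss, n):
    # Per-sequence losses are sums of per-token negative log-likelihoods
    # under the two small auxiliary models; a large positive gap means the
    # downstream-finetuned model finds the sequence much more likely.
    scores = np.asarray(marginal_loss) - np.asarray(conditional_loss)
    return np.argsort(-scores)[:n]          # indices of the top-n sequences
```

The selected indices define the subset S of the pre-training corpus on which the final model is then trained from scratch.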
We hope that this clarifies any misunderstandings and that you will reconsider your review and raise your score. If there are any lingering questions, do not hesitate to post them and we will try to clear them up.
---
Rebuttal 2:
Comment: Hello! We just wanted to ping the reviewer since the discussion period is almost over and you have not yet responded to our rebuttal. We think there are some misunderstandings in the original review that the rebuttal can clear up. We also urge the reviewer to look at the other reviews and the discussion with the other reviewers to see the general positive consensus among them.
Please take a look and let us know if your assessment of the paper has changed.
---
Rebuttal Comment 2.1:
Comment: Thanks for the response. I'm going deep into the paper again to make the final decision. Please give me some time. | Summary: This work proposes a simple, intuitive approach for data selection based on an empirical Bayes formulation, minimizing the difference between the likelihood assigned to a candidate training sample by a model trained on a base distribution and by a model trained on the base distribution plus a smaller sample of high-quality target (test) data of interest. The method is scalable in that it leverages only standard training computation plus additional forward passes, and empirical results suggest that it enjoys scale-transfer properties, where small-scale experiments can be used to select data for larger training runs.
Strengths: - The setup is easy to follow, intuitive. Theoretically principled from a bayesian perspective while still feasible in practice is rare :]
- Connection to related algorithms is well detailed including complementarity and different tradeoffs against contemporary RHOLoss.
- Use of actual conditional-marginal loss gap is a strength of this method over DSDM and other influence function techniques which have been demonstrated to generalize poorly to realistic scenarios for both computational and broken assumption reasons.
- The scaling results (the headline figure, though only presented in the last section) are impressive. This suggests that this simple method is worth further empirical exploration and expenditure of compute in the future.
Weaknesses: - The domain transfer experiment should be broadened. Does the result of Figure 2 hold this favorably only for the Project Gutenberg Books downstream target? It is possible that improvements are only strong when there exist very distinct sub-distributions in the prior that match the target. It would improve this empirical section if the authors considered more than one downstream target distribution.
- (More minor, academic constraints assumed) Only one model architecture and scale extrapolation test setting are considered. It would be more convincing if another model family were considered and a few more scales, especially beyond 1B, as some trends in O(100M) models change dramatically beyond a few billion parameters.
Technical Quality: 4
Clarity: 4
Questions for Authors: - A diversity term seems reasonably easy to incorporate. Did the authors experiment with a diversity regularizer term or step of any sort?
- Can we explain why conditional-only worsens with relaxation of the pre-sampling efficiency constraint tau? (Figure 2)
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: - See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to thank the reviewer for their thorough review and very positive assessment of our paper. In particular, they highlight the intuitive and principled algorithm, the connections to related work, the improvement over influence functions, and the impressive scaling results.
In the rest of this response we will address the weaknesses and questions raised in the review.
## Weaknesses
1a. Yes, we agree that the Books task is a somewhat toyish task. We present it since it is very low-noise and clearly conveys the potential of the algorithm (we should also note that likelihood is often a good proxy for downstream tasks, see e.g. [1][2] and references therein). For a more real-world analysis we present the same experiment for a suite of 8 downstream multiple-choice tasks in Figure 4; the curves are slightly less clean due to the noisy nature of multiple-choice accuracy evaluations, but the general trend is the same. This mixture of 8 target distributions shows that the method can be useful across many tasks at once by leveraging the downstream data as more of a generic definition of “high quality” data rather than targeting the specific tasks. Of course it would be great to expand this to even more tasks, and we look forward to future work attempting to do this.
1b. As a step in this direction, we *add a new experiment* to test whether the 1.2B models that target our suite of 8 downstream tasks generalize to other downstream tasks that are unrelated to the selection process. We select a suite of 6 tasks (mostly from GLUE/SuperGLUE) related to natural language understanding and find that even the model trained on 8x less data selected by FRHOzen outperforms the model trained on randomly selected data by 2 percent. The results are attached as a table in the PDF and in markdown in the global response; we will update the paper accordingly.
2. We totally agree that the scale is relatively small, but unfortunately pre-training beyond 1B to a ~7B model is beyond our computational constraints right now. We hope that publication of the work can encourage those with more compute to attempt such scaling.
[1] Huang et al., 2024, https://arxiv.org/abs/2404.09937
[2] Ruan et al., 2024, https://arxiv.org/abs/2405.10938
## Questions
1. In keeping with Bayesian formulation, it is actually computationally difficult to solve the full diversity issue posed by general subset selection. We did experiment with different approximations (e.g. RHO) that attempt to somewhat incorporate diversity, and find that they perform worse. We agree that this is a great direction for future work to figure out how to best incorporate a notion of diversity, but doing so with extra terms in the objective function itself is beyond the scope of this paper.
2. This is a good question. One hypothesis is that when we use conditional-only we are essentially selecting for datapoints that have low loss under the conditional model. There are (roughly) two types of datapoints where this is true (1) data that is relevant for the downstream task, and (2) data that is just easy for a language model (e.g. data that is highly repetitive). By only selecting with the conditional model we mix up both kinds of data, so selecting too aggressively can start to hurt by selecting too much of type (2). On the contrary, FRHOzen explicitly focuses the model on only selecting type (1) and not type (2).
We hope that this clarifies any misunderstandings and we encourage the reviewer to increase their score or confidence if we have resolved their concerns or to let us know otherwise so we may try to clear up any remaining confusion.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I appreciate the authors thorough response to my initial review. While I am happy with my strong score of 7 as is, I reiterate to the review pool that I believe the work is high quality, the method is simple and well motivated, and the experimental results are promising and thus recommend it for acceptance.
I think that future research would benefit from exploring the author speculation as to question 2. regarding why FRHOzen's selection signal seems more useful than the conditional method. Separating easy tokens from useful tokens is certainly at the crux of all data selection and valuation work in language modeling (throwing out trash tokens is just the table stakes) and so mechanistic analysis of why a method works at this fine-grained a level is just as important as increased scale and scope of the training experiments.
---
Reply to Comment 1.1.1:
Comment: Thanks for the kind words and indeed we agree that this is a promising direction for future work! | Summary: The paper presents FRHOzen, a new data selection method for targeted pre-training of language models, which uses an empirical Bayes-inspired approach to derive a simple, efficient selection criterion based on the relative loss values of two auxiliary models. Evaluated on tasks such as domain adaptation from C4 to Books and multiple-choice question answering, FRHOzen consistently outperforms training on eight times more randomly selected data. It also scales effectively across model sizes, with data selected by 150 million parameter models yielding improvements when used to train a 1.2 billion parameter model.
Strengths: 1. This paper is well-written. The comparison and discussion is very sound, e.g., section 4.2 on computational cost.
2. The proposed method is quite effective -- outperforms training on 8x as much randomly selected data.
3. The method's effectiveness in data selection is transferable across models of different sizes, making it scalable.
Weaknesses: 1. Section 2.1, Bayesian Data Selection, lacks a rigorous derivation. This part can be considered an intuitive understanding of Bayesian optimization, but it does not constitute a strict derivation.
2. The acceleration effect brought by this paper is quite significant, but the experimental setting uses Books as D_{down} and tests it on the Books held-out set. However, in real-world scenarios, the downstream dataset used and tested should be more general, such as a combination of multiple corpora or multiple end tasks. I hope the authors can confirm that the acceleration effect brought by this paper is applicable to more general datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am curious, when the model is larger (for example, at 7B), will the acceleration still be as significant (8x)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to thank the reviewer for their positive assessment of our paper. In particular, they highlight the clarity, the discussion of computational costs, the effectiveness of the method, and the scalability of the approach.
In the rest of this response we will address the weaknesses and questions raised in the review.
## Weaknesses
1. We are not sure we understand the point being raised here. Section 2.1 presents the main objective that we attempt to optimize and then two equalities (Bayes rule and introducing a prior). If any step here in particular is troubling, please let us know and we are happy to explain it further or improve the exposition.
2. Indeed we present the Books task as more of a didactic example than a real-world application to show how the method can be applied. That is why we also consider a suite of 8 downstream multiple choice tasks (results in figure 1a, figure 3, and figure 4). And in the new experiment presented in the global response also show that this generalizes to 6 more tasks. We agree that future work could apply the method even more broadly than this, but we think this is a reasonable proof of concept for a conference paper.
## Questions
1. We totally agree that this is interesting! But unfortunately training a 7B model is beyond our computational constraints right now. We hope that publication of the work can encourage those with more compute to attempt such scaling.
We hope that this clarifies any misunderstandings and we encourage the reviewer to increase their score or confidence if we have resolved their concerns or to let us know otherwise so we may try to clear up any remaining confusion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
1. It seems that formula (1) lacks mention of the Model, as we cannot say the probability of one dataset given another dataset. Therefore, formula (1) refers to the probability of D_down given the Model trained on a dataset. However, the Model itself changes with the given training data and is not fixed, which makes the Bayesian part hard for me to accept. Could the author elaborate further on this?
2. My concern is not about the generalization of downstream tasks; what I mean is that in addition to validating the acceleration on specific training data (books), it is necessary to verify the acceleration on the mixture proportions of pretraining data domains, like in [1]
[1] DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining
---
Reply to Comment 1.1.1:
Comment: Thanks for the quick response!
1. Thanks for the clarification, now we see where the miscommunication is. In equations 1 and 2 we are referring to the marginal likelihoods we get when marginalizing out the model parameters. We then explicitly introduce the model parameters in equation 3 (which is a straight equality from equation 2). We can see how this was maybe not well presented and could be confusing. We are happy to change the presentation so that the model parameters appear in every equation (as they do from equation 3 onwards). Would that satisfy your concern?
2. Again, thanks for the clarification. As we said before, the books example is meant to be didactic. When thinking about practical applications, we likely want to focus on performance on downstream tasks. Take the DoReMi paper that you cite as an example. In that paper, Figure 2 presents their main results which report *downstream task accuracy*. They propose a *method* for data selection that relies on mixing “domains”, where “domains” are a user-defined way to bucket the datapoints based on where they are from. You can view this as a data selection method with very coarse features (i.e. a one-hot feature of which domain a datapoint comes from). FRHOzen can also be used to improve downstream task performance, but we do this using *fine-grained* features so that we can select at the datapoint level, rather than relying on coarse user-defined notions of domain. This seems beneficial, since we can get 8x efficiency improvements in downstream accuracy compared to 2.6x for DoReMi (although the settings are not directly comparable, since they are (1) operating at a larger scale than we are able to, (2) using different data/tasks, and (3) we use data from only a single one of their “domains”). We agree that an interesting direction for future work would be to scale FRHOzen up to larger settings where the data comes from more diverse sources than C4, but there is no a priori reason that our method would not work, since it operates on a per-datapoint level with no need to define domains.
Thanks again for engaging with us. We hope this clarifies things so that you feel you can increase your score or your confidence. Please let us know if you have any more questions! | Summary: This paper proposes a data selection method that can improve the performance of language models on downstream tasks. The method uses two auxiliary models: one pretrained on the pretraining dataset, and the other finetuned on the downstream task from that pretrained model. The method then selects the data with the largest difference in loss between the two models. The authors formulate the method in a Bayesian framework and show that it maximizes the likelihood of sequences that will appear in the downstream task. The authors conduct experiments using a 150M parameter model and show that it can be transferred to a 1.2B model. Using the selected data, the model achieves better performance on downstream tasks even when pretrained on far fewer tokens.
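The selection rule described in this summary (score each candidate by the loss gap between the pretrained-only model and the downstream-finetuned model, then keep the highest-scoring data) can be sketched in a few lines. This is a hypothetical illustration, not the paper's code; the function names and the convention that a larger gap means more useful data are assumptions:

```python
def select_top_fraction(examples, loss_pretrained, loss_finetuned, keep_fraction):
    """Keep the examples whose loss drops the most after downstream
    finetuning, i.e. the data the finetuned model finds 'easier'."""
    scored = [(loss_pretrained(x) - loss_finetuned(x), x) for x in examples]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return [x for _, x in scored[:k]]

# Toy per-example losses: the finetuned model strongly prefers example "b".
pre = {"a": 2.0, "b": 3.0, "c": 1.5}
fin = {"a": 1.9, "b": 1.0, "c": 1.6}
kept = select_top_fraction(list(pre), pre.get, fin.get, keep_fraction=0.34)
```

In a real pipeline the two loss callables would be forward passes of the two auxiliary models over each candidate sequence.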
Strengths: 1. The method is novel and intuitive. The idea of selecting data based on the difference in loss between two models is interesting and can be easily understood. The method is also well-motivated and well-explained in the paper.
2. The method is well-formulated. It can be viewed as maximizing the posterior likelihood of the downstream sequences.
3. The authors have thoroughly discussed the relevance of the method to other data selection methods.
4. The experiments support the claims well. The method is shown to be effective in improving the performance of the model on downstream tasks. Moreover, the method seems to reduce the computational cost of pretraining, since the model can be pretrained on far fewer tokens when using selected data.
Weaknesses: 1. The paper has an aim of "Pre-training", but the models being tested are relatively small (150M and 1.2B). It's unclear whether the method can scale to larger models (e.g., 10B, 100B). The authors should test the method on larger models to show its scalability.
2. In order to select the pretraining data that can improve the downstream performance, the method requires knowing the downstream tasks ahead of time. This is a limitation of the method because in practice, we might not always know the downstream tasks when we are pretraining the model. And people usually want the pretrained model to be versatile, not just good at a few tasks.
3. It's not very clear what will happen to the downstream tasks that are not included in the data selection process. It's unclear whether there are side effects of the data selection process on unknown tasks.
4. As the authors mentioned, "In particular, the FRHOzen objective no longer encourages the selection of a diverse dataset". So a lack of diversity and overfitting might be a concern when using the selected data.
5. Using a probability-based view, the method is effective in improving the likelihood of the downstream sequences. However, it does not explain or guarantee anything about RLHF/preference-based fine-tuning. And it's unclear how it will impact the model's safety and robustness.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What will happen to the downstream tasks that are not included in the data selection process? Will the model perform worse on those tasks?
2. How do you choose the downstream tasks? Any criteria?
3. Do you only consider maximum likelihood for the downstream tasks? What about other downstream finetuning methods, such as RLHF, DPO, etc?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations with respect to approximations, data diversity, and computational cost are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to thank the reviewer for their thorough review and largely positive comments. In particular, they highlight that the method is novel, intuitive, well-formulated, situated wrt related work, and has strong experimental results.
In the rest of this response we will address the weaknesses and questions raised in the review.
## Weaknesses
1. We totally agree that the scale is relatively small, but unfortunately training a 10B model is beyond our computational constraints right now. We hope that the publication of the work will encourage those with more compute to attempt such scaling.
2. This is a reasonable concern and indeed we may not always know all of the downstream tasks. We would like to point out that this is a larger issue that faces the entire field of language model training. It is not clear how we should define the desired behaviors of a language model or how we should evaluate the performance in general, and it is an active area of research. One primary approach these days is to use a suite of downstream tasks. Our goal with this work is to show how this notion of quality can be leveraged to perform data selection. Moreover, there is no limit on the downstream tasks that can be used as targets in D_down. In our experiments we use a suite of 8 tasks, but we hypothesize that this method could scale to larger suites of tasks (of course this would have to be proven in future work).
We also add a new experiment (see the general response and attached PDF) showing that the selected data using our suite of 8 tasks also improves performance by a similar amount on 6 novel tasks.
3. This is highly related to point 2 above, but also a reasonable concern. Note that we do not use the evaluation data directly, but we do use training sets derived from the same tasks. We also add a new experiment to test whether the 1.2B models that target our suite of 8 downstream tasks generalize to other downstream tasks that are unrelated to the selection process. We select a suite of 6 tasks (mostly from GLUE/SuperGLUE) related to natural language understanding and we find that even the model trained on 8x less data selected by FRHOzen outperforms the model trained on randomly selected data by 2 percent. The results are attached as a table in the PDF and in markdown in the global response; we will update the paper accordingly. Thanks for raising this issue so that we could add this experiment!
4. Yes, we agree that a lack of diversity is definitely a worry with this method (as we raise in the paper). However, as we also point out, related existing ideas for how to maintain diversity have serious computational problems and just don’t work as well. We hope that future work can uncover whether this is really a problem at scale or whether starting from enormous web scraped data (which is inherently diverse) and not selecting too aggressively means that this is not a practical issue.
5. Again we agree that the likelihood is merely a proxy for things we may care about downstream like reward functions or safety, but we think it is a reasonable proxy. For some further evidence that likelihood is often a good proxy see e.g. [1][2] and references therein. Moreover, using likelihood importantly facilitates the Bayesian analysis that yields a highly efficient method for selection. It’s not exactly clear how to target other metrics with a similar method, but it is definitely an interesting direction for future work.
[1] Huang et al., 2024, https://arxiv.org/abs/2404.09937
[2] Ruan et al., 2024, https://arxiv.org/abs/2405.10938
## Questions
1. See point 3 above and the added experiment.
2. We chose the suite of downstream tasks following prior work (OLMo). But in general, this is up to the user. Our paper is focused on presenting the methodology, but this methodology is very flexible (which is why we tried to show it on two very distinct targets: multiple choice evals and books).
3. Yes, we only consider likelihood as the metric for the tasks when performing selection. This follows directly from our Bayesian derivations. It is possible that a similar method could be derived with a different objective, but it is not clear how the Bayesian machinery would work when there is an unknown reward function we are targeting. An interesting direction for future work!
We hope that this clarifies any misunderstandings and we encourage you to increase your score if we have resolved your concerns or to let us know otherwise so we may try to clear up any remaining confusion. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for their constructive comments. We hope we have resolved any misunderstandings.
One experiment suggested by the reviews (particularly reviewers yqth and ZLEF) was to test if the data selected by FRHOzen for downstream tasks generalizes to new downstream tasks that are unrelated to the conditioning set. To this end, we evaluated the final checkpoints of the 1.2B models from figure 1a on a new suite of 6 downstream natural language understanding tasks and we report the results in the table below (and in the attached PDF). We find that the FRHOzen data does generalize, indicating that it is not overfit to the tasks, but is picking up on generic notions of data quality for natural language understanding. Even with 8x less data, the FRHOzen models outperform randomly selected training data by nearly 2 points.
| Method | copa | rte | commitment bank | sst2 | commonsense qa | social iqa | Average |
|--------------------------------|-------|-------|-----------------|-------|----------------|------------|---------|
| Random (24b tokens) | 69.2 | 49.1 | 43.2 | 46.9 | 33.9 | 42.6 | 47.5 |
| FRHOzen ($\tau=16$, 6b tokens) | **70.2** | **51.4** | 41.6 | 55.8 | **35.6** | **44.3** | **49.8** |
| FRHOzen ($\tau=32$, 3b tokens) | 67.8 | 50.1 | **46.0** | **55.8** | 32.5 | 43.5 | 49.3 |
Pdf: /pdf/f18db766130976eabd13b9af120340f3d0fd0a23.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Discrete Flow Matching | Accept (spotlight) | Summary: This paper presents Discrete Flow Matching, a new method for generating discrete data, such as language. The approach uses a general family of probability paths between source and target distributions. It offers a formula for sampling from these paths using learned posteriors. By focusing on specific probability paths, it improves generative perplexity compared to previous similar models. When scaled up, the method achieves notable performance on benchmarks. This approach bridges the gap between autoregressive models and discrete flow models.
Strengths: The method is well-motivated for solving the discrete-state data in flow matching.
Weaknesses: - In line 32, you mention one advantage of FM is its flexibility in handling non-Gaussian target distributions. Have you demonstrated this case?
- The method's description is unclear and confusing. I cannot distinguish which part pertains to Campbell's methods and which part is yours.
- In Table 2, it's unclear why the results from Austin et al. [2021a] are poor, as no explanation is provided.
- In Line 214, I'm not entirely sure what the difference is. Perhaps a table could clarify it?
- In Figure 3, the Inception Score (IS) needs to be included to demonstrate the diversity of image generation.
- Besides perplexity, what about the performance on BLEU and BERTScore?
- Diversity is a significant concern for flow-based models. What about the result on diversity-related metrics?
- In F.1, in the context of conditional generation, it's unclear what the source (src) and target (tgt) samples are. Are they conditional prompts, target prompts, or a combination of Gaussian noise and target prompts?
- In section H, there is no qualitative analysis of unconditional generation.
- In Section H, I did not see the color red. I only observed the background colors: grey and yellow.
- It will be great that the code can be provided.
- Related works are missing. It's important to note that flow matching has been utilized in various domains to capture the reader's interest, e.g., boosting diffusion [1], image generation [5], depth estimation [2], motion [3], and even text generation [4].
[1]. Boosting Latent Diffusion with Flow Matching
[2]. DepthFM: Fast Monocular Depth Estimation with Flow Matching
[3]. Motion Flow Matching for Human Motion Synthesis and Editing
[4]. Flow Matching for Conditional Text Generation in a Few Sampling Steps
[5]. Latent Space Editing in Transformer-based Flow Matching
Technical Quality: 2
Clarity: 2
Questions for Authors: as above
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: as above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question: In line 32, you mention one advantage of FM is its flexibility in handling non-Gaussian target distributions. Have you demonstrated this case?** Yes, in fact all the probability paths we use in this paper are non-Gaussian. We worked with a masked source distribution, which corresponds to a delta function concentrated on a special “mask” token.
**Comment: The method's description is unclear and confusing. I cannot distinguish which part pertains to Campbell's methods and which part is yours.** Thank you for the comment. We build upon Campbell’s work [1] but generalize it in ways that allow significant improvement in performance. In particular, we offer the following contributions: (i) we consider arbitrary data coupling $(X_0,X_1)$ and use it for conditioning; (ii) we offer a novel family of probability paths (equation 8) that generalizes the paths used in Campbell as particular cases; (iii) in particular, we show that incorporating polynomial schedulers $\kappa_t$ considerably improves performance; (iv) we provide a unified and closed-form formula for marginal probability velocity (rate) in equations 14, 16, 17 and show it has the exact same form as in continuous Flow Matching, see Table 1. Campbell provided this rate as an expectation and resorted to compute it individually for the masked and uniform noise cases; (v) we develop a general yet closed form formula for corrector sampling with arbitrary schedulers (equation 23). This generalizes Campbell’s stochastic sampling constant $\eta$ ($\alpha_t = 1 + t\eta$ and $\beta_t = \alpha_t - 1$), and we also note that Campbell’s stochastic sampling (Proposition 3.3 and equation 9 in [1]) incorporates the detailed balanced matrix in an implicit way and therefore requires a particular solution in each case; (vi) we show that particular polynomial correctors schedulers provide a further significant boost in results.
[1] Andrew Campbell, Jason Yim, Regina Barzilay, Tom Rainforth, and Tommi Jaakkola. "Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design."
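As a small illustration of point (iii) above, a polynomial scheduler can be as simple as $\kappa_t = t^n$. The sketch below only checks the boundary and monotonicity conditions a valid scheduler must satisfy; the exponent and the specific form are illustrative assumptions, not necessarily the paper's exact choice:

```python
def kappa(t, n=3):
    """Polynomial scheduler kappa_t = t**n (illustrative assumption).
    A valid scheduler satisfies kappa_0 = 0, kappa_1 = 1, and is
    monotone on [0, 1]."""
    return t ** n

def masked_path_probs(t, n=3):
    """Mixture weights of the masked conditional path: the token is the
    target x1 with probability kappa_t, and the mask (source) token
    with probability 1 - kappa_t."""
    k = kappa(t, n)
    return {"target": k, "source": 1.0 - k}
```

Note that the two weights sum to 1 for every $t$, so each intermediate marginal stays a proper PMF.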
**Question: Why are the results from Austin et al. [2021a] poor?** We use the PyTorch implementation from https://github.com/cloneofsimo/d3pm, with the same architecture (DiT), tokenizer (GPT2), and data (Open Web Text) as we use for our model. We will add these details to the revised version.
**Comment: In Line 214, I'm not entirely sure what the difference is.** As mentioned above (see (v),(vi) in the above answer) compared to [1] we provide more general (arbitrary schedulers) and closed form corrector steps. The discrete and continuous diffusion works [2, 3] define only corrector *iterations* and not sampling (i.e., do not progress in time, similar to case (ii) described in line 280 in our submission) and perform corrector iterations by incorporating equal forward and reverse rates for the diffusion probability paths they consider. In our case we develop a closed form corrector step incorporating arbitrary schedulers (see $\alpha_t,\beta_t$ in equation (23)) that includes both corrector **sampling** and corrector **iterations** as special cases.
[2] Andrew Campbell, Joe Benton, Valentin De Bortoli, Thomas Rainforth, George Deligiannidis, and Arnaud Doucet. "A continuous time framework for discrete denoising models."
[3] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. “Score-based generative modeling through stochastic differential equations.”
**Comment: In Figure 3, the Inception Score (IS) needs to be included to demonstrate the diversity of image generation.** Per the reviewer’s request, we added a corresponding inception score graph in the PDF attached to the main response of rebuttal. Note that we observed a similar trend to the FID demonstrated in the original submission.
**Question: Besides perplexity, what about the performance on BLEU and BERTScore?** BLEU and BERT-score are traditionally used for the “data-data” case (e.g., translation), that is when both source and target samples are data, and then generated samples are compared to target (test) data. We use “noise-data” (except the conditioning part) and therefore we couldn't find a reasonable way to apply these metrics in our case.
**Comment: Diversity is a significant concern for flow-based models. What about the result on diversity-related metrics?** We are not aware that flow-based models have a diversity concern in general; perhaps the reviewer refers to distilled flow models? In any case, we didn’t encounter particular diversity issues with our model, and in Tables 2, 3, and Figure 7, with additional details provided in Appendix F.1, we present the entropy of tokens within generated sequences to illustrate the diversity of the model's predictions. Furthermore, our modeling allows for control over the diversity by adjusting the temperature during sampling from $p_{1|t}$.
To further address the reviewer’s concerns, we attach in the rebuttal’s main reply several uncurated generations, sampled with the same prompt. These uncurated generations demonstrate the diversity of our model's output.
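The temperature control mentioned above is the standard rescaling of logits before the softmax; a generic sketch (not the authors' implementation) of how it trades off sharpness against diversity:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Lower temperature sharpens the distribution (less diverse samples);
    higher temperature flattens it (more diverse samples)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

sharp = softmax_with_temperature([2.0, 1.0, 0.0], temperature=0.5)
flat = softmax_with_temperature([2.0, 1.0, 0.0], temperature=2.0)
```

Here `sharp` concentrates more probability on the top token than `flat`, which is the knob the rebuttal refers to for controlling sample diversity.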
**Comment: In F.1, in the context of conditional generation, it's unclear what the source (src) and target (tgt) samples are. Are they conditional prompts, target prompts, or a combination of Gaussian noise and target prompts?** In the context of conditional generation (Appendix F.1), the source-target pairs $(X_0,X_1)$ are as described in Equation 5 in the paper, i.e., $(X_0,X_1) = (\mathbb{I} \odot X_1 + (\mathbf{1}-\mathbb{I})\odot (\mathbb{m},\ldots,\mathbb{m}) , X_1)$, where $\mathbb{I} \in \{0,1\}^{N}$ indicates the conditioning mask. There are no target prompts, nor a combination of Gaussian noise/target prompts. We did our best to understand the reviewer’s question; if we got it wrong, please clarify.
---
Rebuttal 2:
Comment: **Comment: In section H, there is no qualitative analysis of unconditional generation.** Per the reviewer's suggestion, we added unconditional samples, generated by our model (attached in the pdf in the main response). We will add these samples to the revised version.
**Comment: In Section H, I did not see the color red. I only observed the background colors: gray and yellow.** This is a mistake; prompts are marked in gray. We will fix it in the revised version.
**Comment: It will be great that the code can be provided.** We are planning on releasing the code not much after the paper is published.
**Comment: Related works are missing.** We thank the reviewer for the suggestion, we will add the relevant related works to the revised version.
---
Rebuttal Comment 2.1:
Title: thanks for your reply
Comment: I am looking forward to the code.
---
Reply to Comment 2.1.1:
Comment: We are planning to release the code around the time of publication. As we near the end of the discussion period, we would be happy to address any further clarifications or concerns regarding the other comments raised by the reviewer. | Summary: The paper proposes an approach to generative modelling for discrete data, i.e. multidimensional distributions where the variable along every dimension can take value in a finite set. This is an alternative approach to autoregressive generative modelling for discrete data which is currently actively studied for language and code generation.
The philosophy of the proposed method is heavily inspired by the Flow Matching algorithm [1] and Continuous Time Markov Chains (CTMC), which were previously used in [2] to propose a similar model. That is, for a given CTMC, the authors define the vector field generating samples from this CTMC by local updates of the samples (independently along every dimension). Based on the PMF (Probability Mass Function) of the CTMC, the authors derive a formula for the vector field. Furthermore, they introduce the continuity equation analogous to the continuous case, which allows for an easy validation that the change of the density given by CTMC corresponds to the vector field.
The authors perform an ablation study of some of their design choices and extensive empirical studies for generation of code, language, and discrete-valued images. The proposed model outperforms the competitors bridging the gap between flow-based models and autoregressive models.
[1] Lipman, Yaron, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. "Flow matching for generative modeling." *arXiv preprint arXiv:2210.02747* (2022).
[2] Campbell, Andrew, Jason Yim, Regina Barzilay, Tom Rainforth, and Tommi Jaakkola. "Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design." arXiv preprint arXiv:2402.04997 (2024).
Strengths: The paper presents a complete study of the actively studied topic in the field. It is clearly written, the developments presented in the paper are novel and are properly studied empirically. The paper is of great interest to the NeurIPS community.
Weaknesses: The paper presents a complete study and its methodological part does not raise any major concerns. However, I would like to ask the authors to clarify the following question.
Why do posteriors in eq. 15 define a proper distribution? Indeed, according to eq. 8, the linear combination of the conditional distributions with the schedulers should define a correct distribution (sum up to $1$). If we simply try to sum over all possible $x^i$ in eq. 15, we won’t get $1$, hence eq. 15 does not define a posterior. This is an important detail when defining the objective in eq. 25, which is the cross entropy between distributions.
The minor
- Descriptions of the schedulers on lines 96 and 98 are not complete, i.e. there are no conditions that schedulers have to define correct probability distributions in eqs. (9,10) and the scheduler $\kappa^3$ is missing a description.
- Figure 2. The plots presented in the figure have $d = 2$ unlike what is stated in the caption.
- At the beginning of the paper, the authors introduce the notation of random variables as capital letters (e.g. $X_t$). This creates confusion in equations 13-17 because there the authors clearly mean $X_t$ to be the value of a random variable.
- There is a typo in line 160 when describing the panel of Fig. 2.
- There is a typo in line 163 when defining the vector field $v(x,z)$.
- There is a typo in line 582. Index $\ell$ is not a function of $j$ according to eq. (36).
- I haven’t checked thoroughly, but I think equations on the top of page 19 should have a summation over $j\neq \ell$ instead of the summation over all possible $j$.
Technical Quality: 4
Clarity: 3
Questions for Authors: I would suggest adding a discussion of the variable-length generation. It is an important difference with autoregressive modelling and I’m wondering how the authors handle this issue given that their model has two important properties:
1. Conditional generation for partially masked sequences.
2. Independence of the conditional distributions between dimensions.
The questions I would like to have answered are:
1. Does one have to define the length of the generated sequence before generation? If so how this can be decided?
2. What’s the computational cost if one continues generating the sequence in the autoregressive way?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: NeurIPS Paper Checklist does not follow the required format.
The paper adequately discusses the limitations of the proposed approach. The only thing that is not cover enough in the paper is the variable-length generation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question: Why do posteriors in Equation 15 define a proper distribution?** Equation 15 is a proper distribution as follows:
$$\sum_{x^i} \hat{w}^j_t(x^i|X_t) = \sum_{x_0,x_1} \overbrace{\big( \sum_{x^i} w^j(x^i|x_0,x_1)\big )}^{=1} p_t(x_0,x_1|X_t) = \sum_{x_0,x_1} p_t(x_0,x_1|X_t) = 1,$$
where in the first equality we change the summation order.
**Comment: Descriptions of the schedulers on lines 96 and 98 are not complete.** The conditions in lines 96 and 98 should be understood as _additional conditions_ to the general conditions presented in Line 91, i.e., $\sum_{j} \kappa_{t}^{i,j}=1$ and $\kappa_{t}^{i,j} \ge 0$. Combining the conditions in Line 91 with the ones in Lines 96 and 98 guarantees proper distributions. We realize now this is confusing and will clarify this in the revised version of the paper - thank you.
**Comment: Figure 2. The plots presented in the figure have $d=2$ unlike what is stated in the caption.** This is indeed a typo, but should be $d=4$ (and $N=2$), that is, the figure depicts the state space of two tokens $x=(x^1,x^2)$ where each token can get a value in a vocabulary of size $d=4$.
**Comment: Random variable notation creates confusion.** We agree with the reviewer that this notation is confusing and we will use a lower case letter, e.g., $z$, instead in the revised paper.
**Comments: Typos Lines 160, 163, and 582.** Thanks, we will fix it in the revised version.
**Comment: Summation on page 19.** Please note that in the case of $j=\ell$ the term in the summation equals zero, thus, summation over $j \ne \ell$ will yield the same result.
**Question: Does one have to define the length of the generated sequence before generation? If so how this can be decided?** No, the length does not have to be defined before generation. We train the model using flattened data samples which are separated by an end-of-text (EOT) token. The model can predict EOT token in any location in the generated sequence $x^1,x^2,\ldots,x^N$ and the EOT will indicate the end of the generated text (similar to autoregressive modeling).
**Question: What is the computational cost if one continues generating the sequence in the autoregressive way?** The model generates a sequence of length $\leq N$, where $<N$ length can happen if the EOT token is predicted. If it does not (some texts are longer than $N$ tokens) one can potentially continue to generate by conditioning on the last $K<N$ tokens and predict the next $N-K$ tokens, where $K$ is user defined: e.g., large $K$ will provide more context but shorter extension. The cost of predicting these $N-K$ new tokens will be equivalent to a full sequence generation with our model. It is very interesting to develop methods to accelerate the generation, but we leave it to future research.
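The two mechanics described in the answers above — stopping at the first EOT token, and re-masking the tail of the window to continue generation — might look like the following sketch. The token strings, the `generate` callable, and the re-masking layout are hypothetical illustrations, not the authors' code:

```python
MASK, EOT = "<mask>", "<eot>"

def truncate_at_eot(tokens):
    """A generated length-N sequence ends at the first EOT token, if any."""
    return tokens[:tokens.index(EOT)] if EOT in tokens else tokens

def continue_block(generate, prev_tokens, N, K):
    """Condition on the last K tokens and let the model fill the remaining
    N - K masked positions; this costs one full N-token generation."""
    context = prev_tokens[-K:]
    filled = generate(context + [MASK] * (N - K))
    return prev_tokens + filled[K:]
```

A larger `K` gives the continuation more context but yields a shorter extension per generation, matching the trade-off described in the answer.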
---
Rebuttal Comment 1.1:
Title: response acknowledgment
Comment: Thank you for your response! Everything is clear. Sorry for the confusion in some of the questions. | Summary: The paper introduces a novel approach called Discrete Flow Matching, which adapts continuous flow models to discrete sequential data. It extends prior work by integrating discrete state spaces and time-dependent schedulers into a unified framework for non-autoregressive generative modeling. Methodologically, Discrete FM employs generating probability velocities derived from learned posteriors and schedulers, enabling efficient sampling and correction processes. This approach facilitates the transformation of noise distributions into target data distributions with enhanced flexibility and performance. Experimental evaluations across language modeling, code generation, and image generation tasks demonstrate significant improvements over existing methods. Specifically, the model achieves state-of-the-art results on various benchmarks, including HumanEval and MBPP coding tasks, showcasing its efficacy in generating high-quality outputs.
Strengths: - The methodology design is solid with formal analysis and mathematical proofs.
- The paper conducted a set of experiments in language modeling, code generation, and image generation tasks, showcasing the most promising results to date in a non-autoregressive context.
- Technical details about the methodologies are rich. And the experimental setup includes detailed descriptions of the methodologies used, such as masked source training, conditional couplings, probability path schedulers, and corrector steps, which are pivotal for the model's performance.
Weaknesses: - The method is heavily based on the Continuous-Time Markov Chain (CTMC) paradigm from Campbell et al., although this paper proposes theoretical and empirical improvements such as the unified formulation for more general probability paths and velocities as well as the scheduler designs.
- There are still performance gap in code generation evaluation when compared with autoregressive language models.
Technical Quality: 4
Clarity: 4
Questions for Authors: - I am a bit confused by Eq. 10, especially the scheduler terms. Please explain, or indicate if there are mistakes.
- What are the main reasons for discrete flow matching to require a significantly larger number of evaluations?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors acknowledged their limitations, such as the number of evaluations being high compared to continuous flow matching.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment: I am a bit confused with Eq 10 especially the scheduler terms.** Equation 10 proposes a second instantiation of a conditional probability path $p_t(\cdot|x_0,x_1)$ where, given some pair $(x^i_0,x^i_1)$ of source and target tokens, the token at time $t$ is: $x^i_1$ with probability $\kappa^1_t$; $x^i_0$ with probability $\kappa^3_t$; and a uniformly distributed random token with probability $\kappa^2_t$. This probability path is reminiscent of the Brownian bridge in the continuous diffusion world. Lastly, note that the conditions on the schedulers $\kappa^j_t$ that guarantee this conditional path interpolates between $\delta_{x_0}(x^i)$ and $\delta_{x_1}(x^i)$ and stays a proper PMF for all $t\in[0,1]$ are the ones described in lines 98 and 91. We will clarify this in the revised version.
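For concreteness, the three-way mixture described above can be sketched in a few lines of Python. This is purely illustrative (not the authors' code), and the scheduler choices below are hypothetical examples satisfying the stated boundary and PMF conditions, not the ones used in the paper:

```python
import random

def sample_token(x0, x1, t, kappa1, kappa2, kappa3, vocab_size):
    """Sample x_t^i from the conditional path p_t(.|x0, x1):
    x1 w.p. kappa1(t), a uniform token w.p. kappa2(t), x0 w.p. kappa3(t)."""
    u = random.random()
    if u < kappa1(t):
        return x1
    if u < kappa1(t) + kappa2(t):
        return random.randrange(vocab_size)
    return x0

# Hypothetical schedulers: kappa1(0)=0, kappa1(1)=1, kappa3(0)=1, kappa3(1)=0,
# and kappa1 + kappa2 + kappa3 = 1 with kappa2 >= 0 on [0, 1].
k1 = lambda t: t * t
k3 = lambda t: (1 - t) ** 2
k2 = lambda t: 1 - k1(t) - k3(t)  # = 2t(1-t), the uniform-noise bump
```

At $t=0$ this always returns the source token and at $t=1$ always the target token, matching the interpolation conditions.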
**Question: What are the main reasons for discrete flow matching to require a significantly large number of evaluations?** Discrete flow matching requires a higher number of function evaluations compared to its (deterministic) continuous counterpart. We attribute this to its stochastic sampling, similar to sampling by approximating the solution of an SDE, which typically possesses a lower strong convergence order than its ODE-solver counterpart. In particular, for the Euler sampling case, the deterministic Euler method has strong convergence order $1$, while the non-deterministic Euler method (Euler-Maruyama) has strong convergence order $\frac{1}{2}$; see e.g., [1]. While this gives some intuition, we do agree that an analysis of the global convergence of discrete sampling is interesting, and we defer it to future work; in this paper we only show the local convergence error (see the $o(h)=O(h^2)$ term in equation (22)).
[1] Sauer, T., 2011. Numerical solution of stochastic differential equations in finance. In Handbook of computational finance (pp. 529-550). Berlin, Heidelberg: Springer Berlin Heidelberg.
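The convergence-order point can be checked numerically with a toy experiment (our own illustration, not an experiment from the paper): for geometric Brownian motion, where the exact solution is known, the Euler-Maruyama strong error decays only like $h^{1/2}$, so refining the step size helps, but slowly:

```python
import math
import random

def em_strong_error(h, n_paths=2000, mu=0.05, sigma=0.5, T=1.0, x0=1.0, seed=0):
    """Mean |X_h(T) - X(T)| for Euler-Maruyama on dX = mu*X dt + sigma*X dW,
    compared against the exact GBM solution driven by the same Brownian path."""
    rng = random.Random(seed)
    n = round(T / h)
    total = 0.0
    for _ in range(n_paths):
        x, w = x0, 0.0
        for _ in range(n):
            dw = rng.gauss(0.0, math.sqrt(h))
            x += mu * x * h + sigma * x * dw   # Euler-Maruyama step
            w += dw                            # accumulate the Brownian path
        exact = x0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * w)
        total += abs(x - exact)
    return total / n_paths

# Strong order 1/2: shrinking h by 4x roughly halves the error
# (a first-order method would roughly quarter it).
coarse = em_strong_error(0.1)
fine = em_strong_error(0.025)
```

This mirrors why stochastic samplers generally need more function evaluations than deterministic ODE solvers to reach comparable accuracy.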
---
Rebuttal Comment 1.1:
Comment: Thanks for the response that addressed my concerns. I would keep my rating to recommend acceptance. | Summary: The paper presents a discrete flow matching method for modeling discrete data with a discrete state space. The paper presents unified frameworks for training and sampling from the discrete probabilistic model. Importantly, the paper also studied scaling up the model to 1.7B parameters and tested the model on code generation tasks. Comprehensive experiments show that the proposed method outperforms existing methods and also closes the gap to autoregressive models.
Strengths: 1. Impressive empirical studies with the 1.7B scaled-up model. To my knowledge, this is the first discrete diffusion model scaled up to this size.
2. The paper provides a principled view of the discrete diffusion model, and proposes several novel techniques such as backward sampling in conditional generation scenarios.
3. The paper is very clearly written and well organized.
Weaknesses: The paper lacks some experiments on common benchmarks used in existing discrete diffusion models, e.g., LM1B and OWT. I understand the scaling-up experiments on HumanEval are more challenging, but I believe a comparison on the common benchmark would be important to justify the effectiveness over existing methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I didn't find limitations specifically discussed in the paper, though the authors marked it as discussed in the checklist. Please correct me if I missed it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment: The paper lacks some experiments on common benchmarks used in existing discrete diffusion models, e.g., LM1B and OWT.** First, please note that our experimental setup already includes the OWT dataset (see Table 2 and lines 268-273). Second, per the reviewer’s suggestion, we trained and evaluated our model and the baseline models on the LM1B dataset; please see Table 1 in the main rebuttal reply.
_Experimental setup._ For the reviewer’s convenience, we would like to detail all the evaluations done in this work. We experiment with two data modalities, text and images, on small- and large-scale datasets.
| Modality | Scale | Datasets | Compared to | Metrics | Comments |
|---|---|---|---|---|---|
| Text | Small | OpenWebText (OWT) | State-of-the-art prior works, autoregressive modeling | Generative perplexity (using Llama 2, 3, and GPT2), Entropy, NFE | All models are evaluated without temperature annealing. |
| Image | Small | CIFAR10 | Campbell et al., MaskGIT | FID, inception score added in this rebuttal, NFE | |
| Code | Large | Large-scale code mix | Autoregressive (note: no other discrete diffusion/flow work has addressed these tasks before) | Pass@1, Pass@10, Pass@25 on HumanEval and MBPP | |
| Text | Large | Large-scale text mix | Autoregressive, Savinov et al. | Generative perplexity (using Llama 2, 3, and GPT2), Entropy, NFE | |
**Comment: I didn't find limitations specifically discussed in the paper.** In the conclusions (Section 5), we mention the following limitations: (1) Discrete flow matching requires a higher number of function evaluations compared to its (deterministic) continuous counterpart. We attribute this to its stochastic sampling, similar to sampling by approximating the solution of an SDE, which typically possesses a lower convergence order than its ODE-solver counterpart (see e.g., [1]). (2) There remains a performance gap between autoregressive modeling and our proposed approach.
[1] Sauer, T., 2011. Numerical solution of stochastic differential equations in finance. In Handbook of computational finance (pp. 529-550). Berlin, Heidelberg: Springer Berlin Heidelberg. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' insightful feedback on our paper. We address each of the comments/questions raised by the reviewers in the specific threads below. We would be happy to address any remaining concerns during the discussion period.
Here we summarize the new experiments we performed during rebuttal period to address remaining concerns of reviewers:
1. As suggested by reviewer oYBb, in Table 1, we report a new comparison to the baselines on the LM1B dataset.
| Method | NFE | Llama-2 | Llama-3 | GPT2 | Entropy |
|---|---|---|---|---|---|
| Data | | 5.9 | 7.7 | 17.6 | 8.0 |
| Han et al. | >10000 | 67.8 | 97.6 | 150.1 | 8.0 |
| Lou et al. | 256/512/1024 | 27.9/26.1/23.7 | 41.7/39.2/35.0 | 129.3/120.9/104.2 | 8.1/8.1/8.2 |
| Campbell et al. | 256/512/1024 | 26.4/25.7/23.1 | 39.7/34.9/32.6 | 121.7/115.1/99.3 | 8.2/8.2/8.2 |
| FM (Ours) | 256/512/1024 | **24.2/23.4/20.8** | **35.2/33.2/29.2** | **110.6/106.6/86.7** | 8.2/8.2/8.1 |
2. As suggested by reviewer 7VEG, we have computed the inception score for both our proposed and baseline CIFAR10 models. The results are presented in the attached PDF, and demonstrate a similar trend as with the reported FID metric in the submission.
3. As suggested by reviewer 7VEG, we added unconditional samples of our 1.7B Discrete Flow Matching text model to the rebuttal’s PDF.
4. To address reviewer 7VEG's concerns about the diversity of predictions, we added below more **uncurated** qualitative samples, generated with the same prompt.
**Uncurated qualitative conditional samples**
**Einstein was a German-born theoretical physicist who is widely held to be one of the greatest and most influential scientists of all time. Best known for developing the** famous theory of relativity, Einstein was also well known for his advocating for women in the field of science. Find out everything to know about him and scientific theories about the universe.
Albert Einstein is widely held to be one of the most significant scientists of the 21st century. He is also acknowledged for the multibillion impact of his work.
Described as a mathematician, physicist and philosopher, he is mainly regarded for his theories on the universe. And so, he is also considered to have had an immense impact. It is estimated that somewhere around 10,000 people contributed towards the creation of his theories. As a result, the concepts, that he created, and in his case, that of his brilliant genius, and his theories that afforded him the Theory of Relativity, have all consolidated until this day. So, let’s learn everything to know about Einstein and all of his theories and works.
**Albert Einstein was a German-born theoretical physicist who is widely held to be one of the greatest and most influential scientists of all time. Best known for developing the theory of the** earth movement, the theory of relativity, and the principles of the theory of general physics and the theory of special relativity, which are precursors to the quantum theory.
He was the second leading theoretical physicist in the 20 history of science and a cosmopolitan statesman. He is among the most important and influential figures of all time in the development of best known for developing the theory of the earth movement, the theory of relativity and the principles of the theory of general physics and the theory of special physics. Einstein's theories of relativity, which emerged in the first decade of the twentieth century, describe the relationship between the two poles of the universe. His theories also summarize the relationship between gravity and electromagnetism, the speed of light (sometimes used as the "speed"), special effects including telekinesis (the illusion of self), and time travel, and a popular but flawed quantum theory.
**Albert Einstein was a German-born theoretical physicist who is widely held to be one of the greatest and most influential scientists of all time. Best known for developing the** laws of motion and the theory of general relativity, Einstein's contributions developed over time to create his final achievement in physics, relativity with its general theories and theory of gravity, sometimes called the special theory of relativity. He is widely considered as the father of modern quantum mechanics, wrote his Aclonomical Revolution (one could be called the greatest pedagogues of all time), revealed the Newtonian Square of suggestion, the Doppler effect for light and a gyromagnetic effect, all of which lead to the standard physics of quantum mechanics.
Pdf: /pdf/632d5c9bd90820342d46cac639e067bc9549d906.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
BehaviorGPT: Smart Agent Simulation for Autonomous Driving with Next-Patch Prediction | Accept (poster) | Summary: BehaviorGPT: Smart Agent Simulation for Autonomous Driving with Next-Patch Prediction introduces a model architecture for multi-agent simulation of dynamic traffic actors. Improved simulation capabilities are essential to the safe and rapid development of autonomous vehicles. BehaviorGPT structures multi-agent simulation as a next-patch-prediction problem. Given a history of trajectory patches (embeddings of trajectory subsequences of fixed length), the model predicts the next patch's set of states for each actor independently. These predictions can then be treated as a patch, and the process can be repeated in an autoregressive rollout. Through the use of relative encodings (in both time and space) and a "decoder-only" architecture, BehaviorGPT can be trained in parallel on any contiguous subset of patches. BehaviorGPT is state-of-the-art on the Waymo Sim Agents Benchmark, with impressive scores for both trajectory accuracy (minADE) and realism. Furthermore, BehaviorGPT accomplishes these SOTA results with only 3M parameters (an order of magnitude fewer than other competing approaches).
Strengths: - The paper has several empirical strengths. Most notably, it is SOTA on a challenging benchmark (the Waymo Sim Agents Benchmark) despite being a surprisingly small model (3M parameters). Improvement on this benchmark is meaningful, as simulation (and the existing sim2real gap) is a major challenge for the development and deployment of autonomous vehicles.
- The manuscript does an excellent job of contextualizing this work within the growing canon of motion forecasting and simulation literature.
- The manuscript does a nice job providing intuitive motivations for their architectural choices. For example, the RNN in the prediction head is clearly motivated by the sequential nature of trajectory simulation.
Weaknesses: At a very high level, I think this paper presents a model which produces impressive results on an important benchmark (which represents an important problem). However, I think that the current manuscript falls short in multiple dimensions. I do not expect or think that the authors must make substantive changes with respect to all of the points below. However, if the authors were able to expand on or address a couple of the weaknesses described below, it would significantly improve the strength of the submission.
## 1. Scope down claims slightly
The results are super impressive, but I would scope down some of the adjacent claims. For example, there is the assertion of a big sample-efficiency gain over MTVA and Trajeglish, but no evidence is provided to support it. I also think that the claim of being the first "decoder only" model is a bit overstated; this is largely semantic. Other authors writing the same paper might have called your decoder the encoder and the prediction head the decoder. (Additionally, the paper Query-Centric Trajectory Prediction had an awesome ablation where they reduced the encoder to zero layers and still had excellent results.) Instead of focusing on being decoder-only, focus on the ability to further parallelize training. I also find the statements around "next patch prediction" a bit strange, because you predict states, not a patch; it only becomes a patch when it is embedded by the decoder.
## 2. Reproducibility
I don't find the current manuscript sufficiently detailed to reproduce the model. (Code release will fix a lot of this.) Some specific examples include:
- More details are needed around map tokenization; I would not know how to translate your text into code here.
- In the prediction head, it is not clear to me whether there is a single RNN and a single MLP outputting all the mixture components, or multiple copies (I assume the former, but it's not clear).
- Embedding sizes?
## 3. Manuscript quality
- The figures and captions could use work. In general, the captions are far too brief and do not provide enough information to a reader skimming the paper. The figures themselves, with the exception of Figure 2, provide little explanatory value. Figure 3 is a poor use of space, as most of the figure is an identical cartoon repeated three times. It would be great to replace some of the non-useful figures with visual representations of the model output (i.e., real examples).
- References need to be cleaned up. I only skimmed, but a couple of examples: in [18], Densetnt -> DenseTNT, and [48] should cite the NeurIPS 2023 manuscript rather than arXiv.
- I would steer away from introducing the concept of a scene-level patch $P^{\tau}$, as it is never actually constructed.
## 4. Forecasting metrics/leaderboards
As noted in the paper, the simulation problem and the forecasting problem are quite similar. The minADE numbers suggest that this method would also perform well on motion forecasting benchmarks. I would love to see numbers there. I do believe the Waymo Sim Agents Benchmark is an important one, but it is also new and has not had as many competitive submissions as the Waymo or Argoverse forecasting benchmarks. How does this model compare with QCNet on those benchmarks?
## 5. Model introspection
It is not clear from this analysis which elements of the architecture are actually important for its performance. I would like to see more ablations. What happens when we drop one of the factorized attention modules? Does the model really get any worse if we don't do the a2a attention? Does the model improve/degrade if we only do each attention once, or three times? Is the RNN actually important? What if we just predict with an MLP and interpolate? Are the patches actually important, or is it just the move from 10 Hz to 1 Hz?
## 6. Output introspection
It is typical for papers in this space to provide analysis beyond aggregate metrics. There should be a visual analysis of the output. Examples should be provided of instances where the model produced excellent output (specifically in comparison to other methods). Examples should also be provided of instances where the model produced poor outputs. Are there any connecting themes? I.e. does the model struggle with certain road geometries or interactions?
## 7. Alternative validation of value
The real claim of this paper is that they have produced a model which will produce more realistic simulations which in turn lead to improved development of autonomous vehicles. Benchmarks are proxy evidence for this claim. Can this be supported in some other way as well? If we were to train a planner or motion forecasting method on output from this sim vs. another sim, would it perform better?
## 8. Inference performance
Autonomous vehicle companies need to run simulations at massive scales. (Tens of millions of scenarios). To that end it would be nice to see the paper report the inference performance of their model (on reported hardware).
## Conclusion
As noted in the beginning, not all of these areas need to be addressed in the limited rebuttal window. 1-3 should definitely be addressed along with some subset of 4-8 (7 is a long shot for sure).
Technical Quality: 3
Clarity: 2
Questions for Authors: It wasn't clear how many patches were available as history during training or inference?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive suggestions, which greatly help improve the quality of our manuscript. In the following, we attempt to address some critical concerns you raised.
**1. Scope down claims slightly**
(1) **Sample efficiency**: We evaluate our models trained with different proportions of training data. Our model is able to achieve very decent performance when trained on merely 20% of the data, which we attribute to the high sample efficiency of our approach.
| Training Data | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :---: | :---: | :---: | :---: |
| 20% | 1.4881 | 0.7396 | 0.9207 |
| 50% | 1.4060 | 0.7427 | 0.9250 |
| 100% | 1.3804 | 0.7438 | 0.9268 |
(2) **Decoder-only**: We agree that the notion of "decoder-only" is somewhat semantic. We will revise the introduction of our paper to emphasize our approach's capability of parallel training instead of claiming "the first decoder-only agent simulator."
(3) **Next-patch prediction**: In our approach, each token in the decoder fuses the information of multi-step states and takes charge of generating multiple subsequent states; since we define multiple consecutive states as a patch, we think it appropriate to use the term "next-patch prediction."
***
**2. Reproducibility**
More details on implementation will be supplemented in the revised version. Below are the ones that you are concerned about:
(1) **Map tokenization**: We sample points along the polylines every 5 meters and tokenize the semantic category of each 5-meter segment via learnable embeddings. The shape of the embedding table is [17, 128], indicating 17 categories and a hidden size of 128.
(2) **RNN head**: A single RNN+MLP outputs all the mixture components.
(3) **Hidden size**: All hidden sizes are 128.
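To make the map-tokenization description concrete, here is a minimal sketch (our illustration only; the function names are hypothetical, and the embedding table would be a learnable [17, 128] parameter in the actual model rather than the zero-initialized placeholder used here):

```python
import math

NUM_CATEGORIES, HIDDEN = 17, 128  # embedding table shape [17, 128]

def resample_polyline(points, step=5.0):
    """Sample points along a 2D polyline at fixed 5-meter arc-length intervals."""
    out = [points[0]]
    next_d, travelled = step, 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while seg > 0 and travelled + seg >= next_d:
            r = (next_d - travelled) / seg
            out.append((x0 + r * (x1 - x0), y0 + r * (y1 - y0)))
            next_d += step
        travelled += seg
    return out

# Placeholder embedding table; learnable in the actual model.
table = [[0.0] * HIDDEN for _ in range(NUM_CATEGORIES)]

def tokenize_segments(categories):
    """Look up one embedding per 5-meter segment, indexed by semantic category."""
    return [table[c] for c in categories]
```

Each 5-meter segment thus contributes one category-indexed embedding of size 128.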
***
**3. Manuscript quality**
(1) **Figure**: Thank you for the suggestions. We will adjust the figures (e.g., removing the redundant cartoon in Figure 3 and deleting Figure 4) to leave some space for qualitative results. We will also expand the captions to describe more details.
(2) **Reference**: Thank you for spotting the issues. We will clean up the references carefully.
(3) **Notation**: Indeed, the notion of scene-level patches is not that important in our model. We will change Equation 2 into an inline equation.
***
**4. Forecasting results**
BehaviorGPT considers the **joint distribution of ALL agents in the scene**, which is misaligned with the objective of existing motion prediction benchmarks. For example, the Waymo Motion Prediction Benchmark and the Argo 2 Single-Agent Motion Forecasting Benchmark are for marginal prediction; the Waymo Interaction Prediction Benchmark and the Argo 2 Multi-Agent Motion Forecasting Benchmark concern the joint distribution of a subset of agents in the scene. Indeed, we could customize our model architecture and training objective to obtain high scores on motion forecasting benchmarks, but that is out of the scope of this work. Thus, it is more reasonable to compare our approach with joint multi-agent motion prediction models that consider the joint distribution of all agents. We also noted that QCNeXt, a joint multi-agent prediction model that won 1st place in the CVPR 2023 Argo 2 multi-agent motion forecasting challenge, is also on the WOSAC leaderboard. We found that QCNeXt performs much better on minADE (1.08 vs. our 1.54), but its closed-loop performance lags far behind most simulation-oriented models. There seems to be a trade-off between open-loop and closed-loop performance.
***
**5. Model introspection**
(1) **A2A attention**: Without A2A attention, the model cannot capture agent interactions.
| A2A | minADE $\downarrow$ | Realism $\uparrow$ | Collision $\uparrow$ |
| :---: | :---: | :---: | :---: |
| ✓ | 1.6247 | 0.7349 | 0.9409 |
| ✗ | 2.1489 | 0.6659 | 0.6987 |
(2) **#Layers**: We try to fix the hidden size as 128 and vary the number of attention layers. We found that increasing the depth of the models can benefit the performance.
| #Layer | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :---: | :---: | :---: | :---: |
| 1 | 1.7318 | 0.7319 | 0.9149 |
| 2 | 1.6247 | 0.7349 | 0.9163 |
| 3 | 1.5381 | 0.7387 | 0.9199 |
| 4 | 1.4881 | 0.7396 | 0.9207 |
(3) **Patching**: We try to increase the replan frequency by discarding a portion of the predicted states at each simulation step. The test-set results produced by the model with a patch size of 10 are shown below. From the table, we can see that increasing the replan frequency from 1 Hz to 2 Hz can even improve the overall performance, which may benefit from the enhanced reactivity. This phenomenon demonstrates that the performance gain is not merely due to the lower replan frequency, as the model with a patch size of 10 beats the one with a patch size of 5 by an even larger margin when using the same replan frequency (i.e., 2 Hz). However, we found that an overly high replan frequency harms the performance, as indicated by the third row of the table. Overall, we conclude that using a larger patch indeed helps long-term reasoning, but a moderate replan frequency is important for temporal stability.
| Patch Size | Replan Frequency | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :---: | :---: | :---: | :---: | :---: |
| 10 | 1 Hz | 1.5405 | 0.7414 | 0.9308 |
|10 | 2 Hz | **1.4147** | **0.7473** | **0.9349** |
| 10 | 5 Hz | 1.5693 | 0.7342 | 0.9089 |
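The replan mechanism in this ablation can be sketched as follows (our illustrative Python sketch; `model` stands in for the next-patch prediction head, and all names are hypothetical):

```python
def rollout(model, history, sim_steps, patch_size=10, replan_hz=2, sim_hz=10):
    """Autoregressive rollout: the model predicts a full patch of future states,
    but only the first `commit` states are kept before replanning."""
    commit = sim_hz // replan_hz      # e.g. 10 Hz sim at 2 Hz replan -> keep 5 states
    traj = list(history)
    while len(traj) - len(history) < sim_steps:
        patch = model(traj)           # predicts `patch_size` future states
        traj.extend(patch[:commit])   # discard the tail of the patch, then replan
    return traj[:len(history) + sim_steps]

# Toy "model" for illustration: each future state is the previous one plus 1.
toy_model = lambda traj: [traj[-1] + 1 + i for i in range(10)]
```

Lowering `replan_hz` commits more of each patch per step; raising it replans more often, trading temporal stability for reactivity, as the table above indicates.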
***
**6. Output introspection**
Please refer to the PDF file in the general response.
***
**7. Alternative validation**
Validating the usefulness of data-driven simulators for motion planning/prediction is definitely something worthy of doing. This will be our next step!
***
**8. Inference**
The average latency per simulation step is less than 10 ms on an NVIDIA L20 GPU (seq length: 9 secs).
***
**9. Number of patches**
9-sec sequences are used for training, while 1-sec trajectories are used as initial history during inference.
---
Rebuttal Comment 1.1:
Title: Rebuttal followup
Comment: Thank you to the authors for their thorough rebuttal. I look forward to reading the revised manuscript. I have a couple of small follow up points:
*On sample efficiency* - The claim in the paper is a comparative claim, i.e., this method is "more efficient" than other methods. The table provided demonstrates that this method performs well with a fraction of the training data. Is there any reason to believe that other methods wouldn't scale similarly?
*On inference latency* - I would love more information here. The 10 ms: how many agents is that for? Is that a mean? What is the variance of inference times?
In short, the authors have addressed most of my concern and pending a revised manuscript I would significantly raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive feedback!
Comment: Thank you very much for the positive feedback! We are committed to improving the quality of our manuscript based on the discussion during the rebuttal, though we are unable to upload the revised paper at this stage according to the policy of NeurIPS 2024.
Regarding your follow-up questions:
**1. Sample efficiency.**
Currently, few if any methods on the benchmark have been open-sourced, which makes it difficult to replicate different models and evaluate their sample efficiency directly. Perhaps a piece of indirect evidence supporting our claim is Figure 11 in the Trajeglish paper, which indicates that the Trajeglish model is data-hungry. More direct evidence is that the 5M BehaviorGPT trained on 20% of the data achieves lower minADE than Trajeglish trained on 100%.
**2. Inference efficiency.**
The number is averaged over all scenarios on the validation set, with a standard deviation of 3 ms. The number of agents on the validation set is 65$\pm$36 (max: 128, min: 2). | Summary: This work focuses on multi-agent simulation for autonomous driving. Instead of the commonly used encoder-decoder structure, the authors propose a decoder-only autoregressive architecture for better data utilization, and achieve SOTA on the Waymo Sim Agents Benchmark.
Strengths: * The paper identifies the low data utilization of the common encoder-decoder structure, which requires a sequence to be split into history and future. In the proposed autoregressive architecture, each time step is treated as the current one, resulting in higher data utilization.
* The Next-Patch Prediction paradigm is introduced to force the model to perform long-range interaction, preventing the shortcut learning of next-token prediction.
Weaknesses: * It seems that no solid evidence is provided to prove that the model's performance "scales seamlessly with data and computation". I do not think Table 4 can support this assertion.
* Though parameter-efficient, the model may not be superior in terms of computation or inference latency; providing more details about this would be better.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank your valuable feedback. To resolve your concerns, we try our best to conduct some scaling experiments under the constraints of limited time and computing budget.
**Q1: Can BehaviorGPT scale with data and computation?**
**A1:** We answer this question regarding the quantity of training data, the hidden size, and the number of decoder layers.
(1) **Quantity of data**: We evaluate our models (5M parameters, hidden size = 128, #Decoder layers = 4) trained with different proportions of the training data provided by the Waymo Open Motion Dataset. As shown in the table below, our model is able to achieve very decent performance when trained on merely 20% of the training data, which we attribute to the high data efficiency of our approach. Increasing the proportion of training data from 20% to 50% further improves the performance across various metrics. Moreover, training on 100% of the data continues to make the model more powerful. Judging from the trend indicated in the table, we believe that feeding more data into model training will continue to improve the overall performance.
| Training Data | minADE $\downarrow$ | Realism $\uparrow$ | Linear Speed $\uparrow$ | Linear Acceleration $\uparrow$ | Offroad $\uparrow$ |
| :---: | :---: | :---: | :---: | :---: | :---: |
| 20% | 1.4881 | 0.7396 | 0.3633 | 0.3181 | 0.9207 |
| 50% | 1.4060 | 0.7427 | 0.3637 | 0.3203 | 0.9250 |
| 100% | 1.3804 | 0.7438 | 0.3655 | 0.3227 | 0.9268 |
(2) **Hidden size**: We vary the hidden size to obtain models with different numbers of parameters. The experiments use 20% of the training data and 2 layers of the Transformer decoder. As depicted in the table below, increasing the hidden size consistently improves the performance.
| Hidden Size | #Param | minADE $\downarrow$ | Realism $\uparrow$ | Linear Speed $\uparrow$ | Collision $\uparrow$ | Offroad $\uparrow$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 64 | 800K | 1.9637 | 0.7251 | 0.3369 | 0.9229 | 0.9056 |
|128 | 3M | 1.6247 | 0.7349 | 0.3546 | 0.9409 | 0.9163 |
|192 | 7M | 1.4993 | 0.7382 | 0.3646 | 0.9439 | 0.9185 |
(3) **Number of decoder layers**: We also try to fix the hidden size as 128 and vary the number of decoder layers, obtaining models by training on 20% of the data. Based on the experimental results below, we can conclude that increasing the depth of the models can benefit the performance.
| #Decoder Layer | #Param | minADE $\downarrow$ | Realism $\uparrow$ | Linear Speed $\uparrow$ | Collision $\uparrow$ | Offroad $\uparrow$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 1 | 2M | 1.7318 | 0.7319 | 0.3465 | 0.9319 | 0.9149 |
| 2 | 3M | 1.6247 | 0.7349 | 0.3546 | 0.9409 | 0.9163 |
| 3 | 4M | 1.5381 | 0.7387 | 0.3570 | 0.9450 | 0.9199 |
| 4 | 5M | 1.4881 | 0.7396 | 0.3633 | 0.9481 | 0.9207 |
It is a pity that we do not have more computing resources to experiment with even larger models, and we welcome researchers with sufficient computing resources to examine the scalability of our architecture after we contribute our code to the open-source community.
***
**Q2: What about the inference latency?**
**A2**: The average latency per simulation step is less than 10 ms on an NVIDIA L20 GPU (seq length: 9 secs).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response, i think most of my concerns are resolved. I would like to have my rating unchanged.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you again for the feedback! | Summary: This paper proposes a new decoder-only learning scheme for autonomous driving dynamics. Rather than using an encoder-decoder type architecture, the model uses spatial, temporal, and “social” attention between map-agent, time-agent, and agent-agent respectively in a time-autoregressive manner. Further, the work explores the usage of “next patch prediction”, where multiple timesteps are bundled, transformed, and unrolled using an RNN. This allows for easier modelling of long-range dependencies and more efficient processing.
Strengths: **Originality**: To our knowledge, this is the first work in Traffic Simulation utilizing decoder-only autoregressive prediction, so there is a significant degree of novelty.
**Quality** The different “arbitrary” selections, such as the number of neighbors to consider, the top-p to sample and patch size are explicitly ablated, which is nice to see. The modelling of the multiple domains present in autonomous driving seems to be sound.
**Clarity**: The paper is generally structured well and readable.
**Significance**: To our knowledge this is the first fully autoregressive model for autonomous driving simulation. The work is also a clear step up from the previous solutions in the WOSAC 2023 challenge.
Weaknesses: **Originality**: The comparison against previous timeseries-transformers is rather lacking. I would like to see Autoformer and informer mentioned, and, possibly the most relevant here the Space-Time transformer (https://arxiv.org/pdf/2109.12218) which models local interactions within a timestep using graphs (similar to your k-nearest neighbor all2all attention).
**Quality**: Something I would like to see though is a scaling comparison: Transformers tend to perform exponentially better as they scale larger, so seeing something like a 30M model to match the “Trajeglish” work would be nice. It is unclear whether, for instance, the patching is actually necessary to the degree described, or whether this is just an artifact of the model being rather small and shallow. Prior work on the expressivity of transformers, such as https://arxiv.org/abs/2205.11502 has shown that one can map K reasoning steps onto a K-deep model, so it might be that a patching of 10 items is only necessary because the model is too shallow to learn the combination itself. I'm aware that scaling a method might not be possible due to the expense of training a larger model, but without it one cannot be sure of the contributions of patching. I will not see it as a big negative if this is impossible because the performance is good enough to argue for the efficacy of the entire system as a whole, but I'm unsure about the importance of each individual component.
**Clarity**: The notation of the patches is a little hard to parse. I would write
$$S_i^{((\tau-1)\times l+1)\ :\ (\tau \times l)}$$
to make the grouping more clear (It took me way too long to mentally group the terms together…). Specifically, it’s hard to visually group the two sides of the “:” together. Figure 1 is slightly confusing: I understand what you want to show, but maybe put boxes to group the timesteps into tokens such that it becomes clear the transformer predicts the next token which is composed of multiple timesteps.
Something I did not quite understand was the tokenization of the map: You say that you sample every 5m and then assign a class to that sample. Does that mean you assign a class to the span from the last sample to the next one (i.e. you assign a class for [0m, 5m], [5m,10m],...) or just at that sample point (i.e. you assign for 0m, 5m, 10m). In the latter case I would assume you can easily jump over crucial information like center lines since those are less than 5m wide.
**Significance**: The small size of BehaviorGPT is nice, but not too interesting for many real-world problems: generally, I would rather have a more accurate 30M model (that e.g. closes the gap to MVTE in acceleration) than a less accurate 3M variant. There are obviously advantages to a model being small, but the runtime difference between a 3M and a 30M parameter model is rarely big enough to make the former worth it (you can run a ResNet-50 with 25M parameters on a Raspberry Pi in real time…).
Technical Quality: 3
Clarity: 2
Questions for Authors: What happens when scaling the model up? Does the need for patching vanish?
What are the results after 20% of training? (particularly interesting because you criticize the data-inefficiency of prior art)
Why are mixing coefficients only predicted once per patch?
Why does training take so much time? (I expect a 3M timeseries model to train a lot more quickly)
Does training time change significantly with larger patches? (due to RNNs needing to be unrolled)
Why only autoregressive across time and not also across e.g. space?
You claim that
> Developed the Next-Patch Prediction scheme to enhance models’ capability of long-range interaction reasoning, leading to more realistic multi-agent simulation over a long horizon;
Can you support this by plotting e.g. accuracy vs. prediction horizon?
As is, it is unclear whether the higher performance is actually due to long-horizon performance or just an improvement in the short term and then matching the long-horizon performance.
More generally: How is the generalisation to longer sequence lengths? Generalisation to longer sequences is a known problem for transformers.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: All models considered are really small, making it hard to judge whether the individual components of the model are necessary or just an artifact of the expressivity of small models being low.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments. We will revise the statements, notations, and figures according to your suggestions. In the following, we attempt to address your critical concerns by giving more analyses and clarifying the implementation details.
**Q1: Comparisons with timeseries/spatial-temporal Transformers.**
**A1**: Thank you for pointing out these relevant works. We will discuss them in the revised version.
***
**Q2: Scaling experiments.**
**A2**: Experimenting with models as large as Trajeglish is beyond the reach of our computing resources, but we try our best to conduct some scaling experiments under the constraints of limited time and computing budget. We conduct two groups of experiments: (1) varying the hidden size while fixing the number of decoder layers to 2, and (2) varying the number of decoder layers while fixing the hidden size to 128. Although the experimental results show that larger models achieve better results, the performance gain seems to plateau at the scale of 5-7M parameters. We can expect from this trend that continuing to enlarge the model would not bring significant improvement.
| Hidden Size | #Param | minADE $\downarrow$ | Realism $\uparrow$ | Collision $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: | :---: | :---: |
| 64 | 800K | 1.9637 | 0.7251 | 0.9229 | 0.9056 |
|128 | 3M | 1.6247 | 0.7349 | 0.9409 | 0.9163 |
|192 | 7M | 1.4993 | 0.7382 | 0.9439 | 0.9185 |
| #Decoder Layer | #Param | minADE $\downarrow$ | Realism $\uparrow$ | Collision $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: | :---: | :---: |
| 1 | 2M | 1.7318 | 0.7319 | 0.9319 | 0.9149 |
| 2 | 3M | 1.6247 | 0.7349 | 0.9409 | 0.9163 |
| 3 | 4M | 1.5381 | 0.7387 | 0.9450 | 0.9199 |
| 4 | 5M | 1.4881 | 0.7396 | 0.9481 | 0.9207 |
***
**Q3: Tokenization of the map.**
**A3**: Each map token spans 5 meters, and a semantic category is assigned to the 5-meter segment. We will clarify this in the revised version.
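For concreteness, below is a minimal sketch of such span-based map tokenization (the function name and token format are illustrative, not our actual implementation): a polyline is resampled at fixed arc-length intervals, and each token covers a whole 5-meter span rather than an isolated sample point.

```python
import numpy as np

def tokenize_polyline(points, seg_len=5.0):
    """Resample a 2-D polyline at fixed arc-length intervals and emit one
    token per `seg_len`-meter span, i.e. classes are assigned to the spans
    [0 m, 5 m], [5 m, 10 m], ..., not to isolated sample points."""
    deltas = np.diff(points, axis=0)
    arc = np.concatenate([[0.0], np.cumsum(np.hypot(deltas[:, 0], deltas[:, 1]))])
    bounds = np.append(np.arange(0.0, arc[-1], seg_len), arc[-1])
    xs = np.interp(bounds, arc, points[:, 0])
    ys = np.interp(bounds, arc, points[:, 1])
    # each token: (start point, end point) of one span; a semantic class
    # for the whole span would be attached here in a real map encoder
    return [((xs[i], ys[i]), (xs[i + 1], ys[i + 1])) for i in range(len(bounds) - 1)]

# a straight 12 m centerline -> spans [0, 5], [5, 10], [10, 12]
line = np.array([[0.0, 0.0], [12.0, 0.0]])
print(len(tokenize_polyline(line)))  # 3
```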
***
**Q4: Significance of small models.**
**A4**: We understand that today is the era of large models, but we must point out that many industrial applications, such as autonomous driving, still use small models (some false advertising around large models might somewhat mislead consumers). As also indicated by Reviewer gNhc, autonomous driving companies need to run simulations at massive scales before real-world deployment, so simulation efficiency determines how fast the autonomous driving system can be upgraded. In industry, pursuing 1% more realistic simulation with 10x larger models may not be a wise strategy. Thus, we believe small models can also make a great impact on real-world applications.
***
**Q5: Does an extremely large model need the patching design?**
**A5**: Since it is impossible for us to train a super-large model, we are unable to reach a conclusion. However, we have shown that the patching design helps small models beat very large models, demonstrating the value of the patching mechanism: if autonomous driving companies can attain satisfying simulation quality with a small model, there is no reason for them to spend far more money on training and deploying an extremely large model.
***
**Q6: Data efficiency.**
**A6**: We evaluate our models (5M parameters, hidden size = 128, #Decoder layer = 4) trained with different proportions of the training data. Our model achieves very decent performance when trained on merely 20% of the training data, demonstrating the high data efficiency of our approach.
| Training Data | minADE $\downarrow$ | Realism $\uparrow$ | Linear Speed $\uparrow$ | Linear Acceleration $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: | :---: | :---: |
| 20% | 1.4881 | 0.7396 | 0.3633 | 0.3181 | 0.9207 |
| 50% | 1.4060 | 0.7427 | 0.3637 | 0.3203 | 0.9250 |
| 100% | 1.3804 | 0.7438 | 0.3655 | 0.3227 | 0.9268 |
***
**Q7: Why are mixing coefficients only predicted once per patch?**
**A7**: Because we desire the mixing coefficients to represent the likelihood of *multi-step* behavior.
***
**Q8: Why does training take so much time?**
**A8**: Besides model size, the training time is also determined by the computational complexity. On the one hand, our model is trained on 9-second sequences at 10 Hz. On the other hand, the agent simulation task is more than a timeseries problem, as most traffic scenarios involve hundreds of agents. Last but not least, the Waymo Open Motion Dataset is one of the largest datasets in autonomous driving, involving 574 driving hours and a 1.4 TB download size.
***
**Q9: Does training time change significantly with larger patches?**
**A9**: We did not notice a significant change in training time when using a patch size of 10, which may be due to the light weight of the RNN head (1 GRU layer with a hidden size of 128).
***
**Q10: Why only model the time dimension autoregressively (and not also, e.g., space)?**
**A10**: While it is natural to model the time dimension autoregressively, it is difficult to determine the order of agents in the chain. Image autoregressive models (e.g., PixelCNN) also face this problem, where the order of pixels in the chain may significantly affect the performance.
***
**Q11: Long-term vs short-term.**
**A11**: Please note that scenarios are generated autoregressively rather than in one shot. Without good short-term performance, it is unlikely to have good long-term performance owing to compounding errors.
***
**Q12: How about the extrapolation ability to longer sequences?**
**A12**: We tried training a model on 5-second sequences and generating 9-second sequences during inference. The results below demonstrate the extrapolation ability of our approach.
| Training | Inference | minADE $\downarrow$ | Realism $\uparrow$ | Collision $\uparrow$ | Offroad $\uparrow$ |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 9s | 9s | 1.6247 | 0.7349 | 0.9409 | 0.9163 |
| 5s | 9s | 1.6294 | 0.7333 | 0.9375 | 0.9100 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough followup!
> Scaling experiments / Significance of small models
The scaling experiments are very interesting.
The reason I'm so hung up on model size is that up-to a certain size (depending on e.g. the cache size of the GPU) there is only a small impact when increasing the model size significantly, so increasing the model size by a factor X would not affect the runtime by that same factor X. This is especially true during inference where models need much less memory due to the lack of gradients.
I do think that especially the second table still shows improvements as the model's depth is increased (plotting minADE shows an almost linear improvement per added layer), but as I mentioned I do not hold the lack of training resources against this work. It is still interesting that we could expect further improvements when increasing the depth.
> Does an extremely large model need the patching design?
> if autonomous driving companies can attain satisfying simulation quality with a small model, there is no reason for them to spend way more money on training and deploying an extremely large model.
Fair argument.
> Data efficiency
That's a pretty good result even with only 20% of the training data. I recommend adding this into at least the appendix to support your claim of higher data efficiency (line 86 in your paper).
>Why are mixing coefficients only predicted once per patch?
> A7: Because we desire the mixing coefficients to represent the likelihood of multi-step behavior
Let me check whether I get this right: you effectively treat every chain of RNN calls as one possible "future" and then mix the different paths with your mode weights? I.e. you assume the distribution of paths is multi-modal, while each path in isolation remains unimodal.
In that case your modelling makes sense (though I'm not sure how realistic that "unimodal within path" assumption is).
> Q8: Why does training take so much time?
Oh, I did not consider that the dataset is 1.4TB worth of motion data. In that case it makes sense that training through 100% of the data takes as much time as it does.
> Q9: Does training time change significantly with larger patches?
Good to know: my concern was that the recursive evaluation of the RNN itself was what made the training time so long, but if that isn't a problem then that's great.
> Q10: Why only model the space dimension autoregressively?
Makes sense
> Q11: Long-term vs short-term. / How about the extrapolation ability to longer sequences?
The model being able to extrapolate to ~2 times its training length is sufficient for me to support your claim in line 87 (maybe also add this to the appendix).
It's worth noting that "long range interaction" can mean very different things: In your case "long range" is 90 steps (9s@10Hz) while other transformer papers talk about "long ranges" in the order of 1000 to 16000 steps (e.g. https://arxiv.org/abs/2011.04006). However, I'm also not sure how you can easily clarify this since what people think of "long range" heavily depends on their background.
From my end, the authors have addressed all my concerns to the best of their ability and I would increase my score to an accept (7).
---
Reply to Comment 1.1.1:
Title: Thanks for your positive feedback!
Comment: Thank you very much for the positive feedback! We will supplement these important experimental results and continue to improve the quality of our paper as you suggested.
Below is a bit more discussion regarding your comments:
**1. Assumption of unimodal within a path.**
The motivation behind this assumption is that a path represents a high-level intention of agents, with the corresponding mixing coefficient capturing the likelihood of the intention (which we call "intention uncertainty"). Each step of a path is modeled as a unimodal distribution, with the variance of the distribution capturing the "control uncertainty."
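A rough sketch of this design (with illustrative sizes and random weights, not our exact implementation): the mixing coefficients are computed once per patch, capturing intention uncertainty over whole paths, while a single shared recurrent cell unrolls every mode's path step by step.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, n_modes, patch = 8, 4, 10

W_h = rng.normal(size=(hidden, hidden)) * 0.1   # shared recurrent weights
W_o = rng.normal(size=(hidden, 3))              # per-step (x, y, yaw) projection
W_pi = rng.normal(size=(hidden, n_modes))       # mixing-coefficient head

def decode_patch(ctx):
    """ctx: [hidden] context vector for one agent at the current patch."""
    # mixing coefficients are predicted ONCE per patch: each mode is a
    # whole multi-step "path" (an intention), not a per-step choice
    logits = ctx @ W_pi
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()

    trajs = []
    for m in range(n_modes):            # the same weights serve every mode
        h = np.tanh(ctx + m)            # mode-specific initial state
        steps = []
        for _ in range(patch):          # unimodal step distribution per path
            h = np.tanh(h @ W_h)
            steps.append(h @ W_o)
        trajs.append(np.stack(steps))
    return pi, np.stack(trajs)          # [n_modes], [n_modes, patch, 3]

pi, trajs = decode_patch(rng.normal(size=hidden))
print(pi.shape, trajs.shape)            # (4,) (4, 10, 3)
```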
**2. The notion of "long-range."**
We are aware that the meaning of "long-range" depends on the specific context. For decision making and motion planning in autonomous driving, 9-second sequences are fairly long (imagine yourself as a driver or a pedestrian: is it trivial to anticipate what will happen on the road 9 seconds later?). Since we have confined our paper to the domain of autonomous driving, we think the notion of long-range interaction would not be that confusing, but we will still clarify the sequence length produced by our model in the revised paper.
Thank you again for the insightful comment! | Summary: This work presents BehaviorGPT, a model for trajectory prediction which is decoder only and respects temporal causality by employing a autoregressive sequence model. They opt for a coarser time resolution of the sequence they call "patching" for reasons of efficiency and larger context, in analogy to word-level instead of byte-level representations for language sequence models. After predicting a coarser time segment ("patch") embedding, they then decode this into the finer time resolution sequence of states also respecting temporal causality with a RNN decoder within the "patch".
The model is decoder only, in the sense that they can process an infinite temporal stream where agent input representation is the same as agent output representation. For the static map information, this is encoded once and cross-attended to in the rest of the decoder-only model. The inner architecture is interleaved attention between map, agents, and time dimension.
They report exceptional performance results on the Waymo Open Sim Agents Challenge (WOSAC), which requires the models obey temporal causality to respect a sim agents use case. At the same time, their model is 92% smaller than competitive methods, which is substantially more parameter efficient.
Strengths: This is a significantly novel decoder-only model, with reasonable complexity, with very strong results.
Ideas like "triple attention" and QCNet's positional encoding are present in other papers. Predicting coarser state subsequences is a straightforward idea. But piecing these ideas together into the full decoder-only model results in a significant new model (in the specific area of multi-agent trajectory prediction).
With caveats, the paper is mostly very clearly explained, and will definitely have impact for other practitioners to replicate/extend.
Weaknesses: The biggest doubt I have about the impact of this paper is: is the impact of the paper
A. the decoder only architecture or
B. "patching", aka lower resolution modeling for efficiency reasons
Without patching, the model performs significantly worse than other top methods (compare minADE with patchsize=1 in Table 3 to minADEs in Table 1). Thus,
- does decoder-only really matter?
- if one added the patching idea to Trajeglish or MVTE or MTR++, would those also perform much better?
My hunch is patching is doing the heavy lifting here, but that is not clear in the story.
If this hunch is true, one story of this paper is "other methods faithfully adhere to the overly-fine 10hz native processing, but BehaviorGPT found a way to bypass that inefficiency"
Technical Quality: 4
Clarity: 3
Questions for Authors: Q1. I have a big pet peeve with the terminology "patching", which is why I mostly keep it in quotes throughout this writeup. To me (as a native english speaker), the notion of a "patch" only makes sense to describe a 2D or 3D region. Calling a subsequence or segment of a 1D sequence a "patch" is very unintuitive. In language models and in computer science when referring to 1D sequences in general, more common terms are subsequence, (sub)segment, or chunk. In my opinion, "chunk" is nice, and gives me intuition that your process is analogous to this: https://chunkviz.up.railway.app/
So, not a question, but please consider this!
Q2. How does this model do on the Waymo Open Motion Dataset's Motion Forecasting benchmark? Many papers in Table 1 also report results there, so I am curious.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful comments! We fully understand your concerns and hope to explain our motivation as well as some subtle details regarding your doubt.
**Q1: Does decoder-only architecture really matter?**
**A1**: Indeed, it is possible to achieve very good results with an encoder-decoder model, but our paper intends to convey the message that "decoder-only models are sufficient to do very well." In other words, it may not be necessary to design some intricate modules to encode the history and employ some well-designed DETR-like decoders for future prediction as done by the most advanced trajectory prediction models, such as Wayformer, MTR, and QCNet. Given that using a much more concise decoder-only architecture can attain super strong results, there is no reason for us to pursue a more complicated solution due to the principle of Occam's razor. Besides the conciseness, combining decoder-only Transformers with relative spatial-temporal positional embeddings has advantages in engineering, as we can utilize the key-value cache technique to reuse past computations. By contrast, typical encoder-decoder models need to encode the history and run the full decoder at each simulation step, which is quite inefficient.
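To illustrate the key-value caching point with a toy single-head attention example (not our actual model), each new simulation step attends over cached keys and values instead of re-encoding the whole history:

```python
import numpy as np

def attend(q, K, V):
    """Single-query softmax attention over all cached keys/values."""
    scores = q @ K.T
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

rng = np.random.default_rng(0)
d, steps = 4, 6
K_cache, V_cache, outs = [], [], []
for t in range(steps):                      # one simulation step at a time
    q, k, v = (rng.normal(size=d) for _ in range(3))
    K_cache.append(k)                       # past keys/values are reused,
    V_cache.append(v)                       # not recomputed from scratch
    outs.append(attend(q, np.stack(K_cache), np.stack(V_cache)))
print(len(outs), outs[0].shape)             # 6 (4,)
```

With the cache, step t costs attention over t entries; an encoder-decoder that re-encodes the history at every step pays the full quadratic cost repeatedly.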
Moreover, we also found that decoder-only models have very high sample efficiency. Compared with encoder-decoder models where each scenario is treated as one pair of history and future, BehaviorGPT models each time step as the current one and requires each state to model subsequent states’ distribution during training, which is equivalent to constructing training samples from every possible history-future pair of the time series. To illustrate the sample efficiency of BehaviorGPT, we evaluate the models (5M parameters, hidden size = 128, #Decoder layer = 4) trained with different proportions of training data. The table below shows that using merely 20% of the data for training has already enabled the model to surpass many SOTA. The high sample efficiency allows small models to achieve incredible performance, which is also an essential reason for using decoder-only architecture.
| Training Data | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: |
| 20% | 1.4881 | 0.7396 | 0.9207 |
| 50% | 1.4060 | 0.7427 | 0.9250 |
| 100% | 1.3804 | 0.7438 | 0.9268 |
***
**Q2: If one added the patching idea to other models or lowered the replan frequency, would those also perform much better?**
**A2**: We believe the patching idea can be applied to other autoregressive models (e.g., Trajeglish). However, one thing we want to point out is that many top methods on the WOSAC are adapted from SOTA motion prediction models (e.g., MTR). These methods do not necessarily comply with the closed-loop requirements of agent simulation and often produce 8-second trajectory points in one shot. Thus, these open-loop methods have already been operating at an extremely low replan frequency (i.e., 0.125 Hz, which violates the closed-loop requirement). However, our approach outperforms them by a large margin while operating at a moderate replan frequency.
To help readers understand how the replan frequency may affect the simulation results, we varied the replan frequency of the same model (patch size = 10) during inference, which can be achieved by discarding a portion of the predicted states at each simulation step. The test set results are shown as follows. From the table, we can see that increasing the replan frequency from 1 Hz to 2 Hz improves the overall performance, which may benefit from the enhanced reactivity. This phenomenon demonstrates that the performance gain is not merely due to the lower replan frequency, as the model with a patch size of 10 outperforms the one with a patch size of 5 by an even larger margin when both use the same replan frequency (i.e., 2 Hz). However, we found that an overly high replan frequency harms the performance, as indicated by the third row of the table. Overall, we conclude that using a larger patch indeed helps long-term reasoning, but a moderate replan frequency is important for temporal stability, which may be neglected by prior works.
| Patch Size | Replan Frequency | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: | :---: |
| 10 | 1 Hz | 1.5405 | 0.7414 | 0.9308 |
|10 | 2 Hz | **1.4147** | **0.7473** | **0.9349** |
| 10 | 5 Hz | 1.5693 | 0.7342 | 0.9089 |
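The replan procedure can be sketched as follows (`predict_patch` is a toy stand-in for the model, and the numbers are illustrative): the model always emits a full patch, but only the first few steps are committed before the next replan.

```python
def rollout(predict_patch, state, horizon=80, patch=10, replan_hz=2, sim_hz=10):
    """Roll out a scenario by repeatedly predicting a full patch but
    committing only the first `keep` steps before replanning."""
    keep = sim_hz // replan_hz                # e.g. 10 Hz / 2 Hz -> 5 steps
    committed = []
    while len(committed) < horizon:
        future = predict_patch(state, patch)  # `patch` predicted states
        committed.extend(future[:keep])       # discard the remaining steps
        state = committed[-1]
    return committed[:horizon]

# toy stand-in for the model: each step just increments the state
toy = lambda s, n: [s + i + 1 for i in range(n)]
traj = rollout(toy, 0.0)
print(len(traj))  # 80 states, i.e. 8 seconds at 10 Hz
```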
***
**Q3: Is it appropriate to use the terminology "patching"?**
**A3**: Thanks for your comment, and we agree that the notion of a "patch" is more suitable for describing 2D or 3D regions. In fact, BehaviorGPT uses a multi-agent formulation to simulate all agents' states simultaneously. In this sense, our formulation involves multiple dimensions, including the agent dimension, the time dimension, and the state dimension (3D position + 2D velocity + 1D yaw angle). Thus, we think it appropriate to use the notion of a "patch" in our case.
***
**Q4: What about the motion forecasting results?**
**A4**: As mentioned above, many approaches on the WOSAC leaderboard are adapted from typical marginal motion forecasting models, so they can be tested on the Waymo motion prediction benchmark without any effort. In contrast, BehaviorGPT is a generative model that considers the joint distribution of all agents in the scene, which is misaligned with the objective of the Waymo motion prediction benchmark. Thus, it is more reasonable to compare our approach with other joint multi-agent motion prediction models. In fact, the WOSAC leaderboard also evaluates minADE, the most commonly used metric for motion prediction. We also noted that QCNeXt, a joint multi-agent prediction model that has won 1st place in the CVPR 2023 Argoverse 2 multi-agent motion forecasting challenge, is also on the WOSAC leaderboard. We found that QCNeXt performs much better on minADE (1.08 vs our 1.54), but its closed-loop performance lags far behind most simulation-oriented models. It seems to be a trade-off. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Please find the qualitative results and the loss curves in the attached PDF file.
Best,
Authors
Pdf: /pdf/12407c41ccd44466581dd1d00db0b608f1e6bdb0.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The paper proposes a decoder-only model for traffic simulation. Rather than predicting the next state/token, the model predicts the next patch, a small trajectory segment. The network can select among several possible patch generators, which are based on RNNs. The resulting architecture achieves strong results with a small Transformer-based model of only about 3M parameters.
Strengths: The proposed approach is interesting and works well. It combines several ideas that have been shown to be efficient into a network for traffic simulation. The results, given the model's size, are impressive, and it would be interesting to see how it performs if scaled up.
Weaknesses: -The ablation study is not fully clear to me, I can not see a base model, and I also do not know what parameters are used in the final model.
-Most of the ideas in the paper are known in other related fields. Predicting more than just the next action/state is known in imitation learning to help with compounding errors, and it is also being used in LLMs [1]. The transformer architecture design is similar to HPTR.
-A lot of details to reproduce the results are not clear (see questions)
[1] Better & Faster Large Language Models via Multi-token Prediction, Gloeckle. et al.
Technical Quality: 3
Clarity: 3
Questions for Authors: -Is it correct that you have as many RNNs as modes, with N_mode different parameters?
-How do you select the 32 trajectories for the submission?
-If you keep all modes with p>=0.9, do you not have issues with an excessive number of trajectories? Or do you just keep the top-1?
-Are there more tricks in the post-processing? This is normally important for achieving good results in the sim agents challenge.
-Is it correct that your input tokens are continuous?
-Do you run the patch generation in a receding horizon fashion, where the model would run at 10Hz even though you generate 1s outputs. If not, would it be possible to do so?
-Why is the training so slow? I believed that one of the advantages of a decoder-only model compared to more policy-learning-based methods such as TraffiBot would be training speed, since there is no autoregressive rollout during training.
-Why do you use a RNN for the patch generation, could a simple MLP not do the same job and be faster?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations could be discussed in more details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the valuable feedback. In the following, we answer your questions to resolve some critical concerns.
**Q1: Details of final models and ablation studies.**
**A1**: We list the detailed configurations of each group of experiments as follows, which will be added to the revised version.
| Table | Training Data | Evaluation Set | #Param | #Max Agent | #Decoder Layer | Hidden Size | Patch Size | RNN Head | #Neighbor | #Mode | top-p |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| 1 | 100% | Test | 3M |128 | 2 | 128 | 10 | Autoregressive | 32 | 16 | 1.0 |
| 3 | 20% | Val | 3M | 128 | 2 | 128 | Ablated | Non-Autoregressive | 32 | 16 | 1.0 |
| 4 | 20% | Val | 3M | 64 | 2 | 128 | 10 | Non-Autoregressive | Ablated | 8 | 0.95 |
| 5 | 20% | Val | 3M | 100 | 2 | 128 | 10 | Non-Autoregressive | 32 | Ablated | Ablated |
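For clarity, assuming the top-p column denotes nucleus sampling over the predicted mode probabilities, a minimal sketch of that sampling step (illustrative, not our exact implementation) is:

```python
import numpy as np

def top_p_sample(probs, p=0.95, rng=np.random.default_rng(0)):
    """Nucleus (top-p) sampling: keep the smallest set of modes whose
    cumulative probability reaches p, renormalize, and sample from it."""
    order = np.argsort(probs)[::-1]               # modes by descending prob
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    kept = order[:cutoff]
    return rng.choice(kept, p=probs[kept] / probs[kept].sum())

modes = np.array([0.5, 0.3, 0.15, 0.05])
print(top_p_sample(modes, p=0.75))  # draws mode 0 or 1 only
```

With top-p = 1.0 (as in Tables 1 and 3), every mode stays eligible and the step reduces to plain sampling from the mode distribution.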
***
**Q2: Relevant ideas in related fields.**
**A2**: Thank you for mentioning the relevant ideas in related fields, such as the concurrently proposed multi-token prediction in LLMs and the KNN design of HPTR, which will be discussed in the revised version. However, to our knowledge, using patch-wise tokens in decoder-only Transformers is a new attempt.
***
**Q3: Implementation Details.**
**A3**: All hidden sizes are set to 128. All attention layers have 8 attention heads with 16 dimensions per head. We train our models for 30 epochs with a batch size of 24 using the AdamW optimizer. The weight decay rate and the dropout rate are both 0.1. Using the cosine annealing scheduler, we decay the learning rate from $5 \times 10^{-4}$ to 0. Our main results are produced by a single model with 2 decoder layers and 3M parameters. Other details that you are interested in are discussed below.
**(1) Is it correct that you have as many RNNs as modes, with N_mode different parameters?**
The parameters of the RNN are shared across all modes.
**(2) Sampling strategy.**
We randomly sample one behavior mode from each agent's next-patch distribution until we complete the 8-second trajectories of all agents. To obtain 32 replicas of rollouts, we repeat this process using different random seeds.
**(3) Post-processing tricks.**
We do not apply any other post-processing tricks.
**(4) Is it correct that your input tokens are continuous?**
Yes, our input and output tokens are all continuous.
**(5) Do you run the patch generation in a receding horizon fashion?**
We try to increase the replan frequency by discarding a portion of the predicted states at each simulation step. The test set results produced by the model with a patch size of 10 are shown as follows. From the table, we can see that increasing the replan frequency from 1 Hz to 2 Hz can improve the overall performance, which may benefit from the enhanced reactivity. However, we found that an overly high replan frequency harms the performance, as indicated by the third row of the table. The results in Table 1 are based on 1-Hz replan frequency.
| Patch Size | Replan Frequency | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: | :---: |
| 10 | 1 Hz | 1.5405 | 0.7414 | 0.9308 |
|10 | 2 Hz | **1.4147** | **0.7473** | **0.9349** |
| 10 | 5 Hz | 1.5693 | 0.7342 | 0.9089 |
**(6) Why is the training so slow?**
Compared with typical encoder-decoder models, the sequence length in our approach is much longer. For example, most motion forecasting models on the Waymo Open Motion Prediction Benchmark utilize 1-second history to predict 8-second future trajectories, while our decoder-only Transformers simultaneously utilize 1-second, 2-second, ..., and 8-second history to predict the next patch during training.
**(7) Why do we use RNNs for the patch generation?**
We hope to comply rigorously with the closed-loop requirements of agent simulation via autoregressive RNNs. We have not tested a simple MLP head yet.
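As a side note on the schedule mentioned in A3 (cosine annealing from $5 \times 10^{-4}$ to 0 over 30 epochs), the per-step learning rate can be written in closed form; the sketch below is illustrative rather than a copy of our training code:

```python
import math

def cosine_lr(step, total_steps, lr_max=5e-4, lr_min=0.0):
    """Cosine-annealed learning rate decaying from lr_max to lr_min."""
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

print(cosine_lr(0, 30))   # 0.0005 at epoch 0
print(cosine_lr(15, 30))  # ~2.5e-4 halfway through training
```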
***
**Q4: What are the limitations?**
**A4**: (1) Currently, BehaviorGPT underperforms in kinematics-related performance (e.g., linear/angular speed likelihood), which can be enhanced by incorporating a kinematic model (e.g., bicycle model); (2) BehaviorGPT does not support controlling agent behavior with prompts (e.g., language, goal points). Future work on agent simulation may consider controllable generation; (3) We have not verified whether BehaviorGPT will facilitate the development of motion planning.
***
**Q5: Scaling experiments.**
**A5:** We conduct some scaling experiments under the constraints of time and computing budget.
(1) **Hidden size**: We vary the hidden size to obtain models with different parameters. The experiments use 20% of the training data and 2 layers of the Transformer decoder. As depicted below, increasing the hidden size consistently improves the performance.
| Hidden Size | #Param | minADE $\downarrow$ | Realism $\uparrow$ | Linear Speed $\uparrow$ | Collision $\uparrow$ | Offroad $\uparrow$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 64 | 800K | 1.9637 | 0.7251 | 0.3369 | 0.9229 | 0.9056 |
| 128 | 3M | 1.6247 | 0.7349 | 0.3546 | 0.9409 | 0.9163 |
| 192 | 7M | 1.4993 | 0.7382 | 0.3646 | 0.9439 | 0.9185 |
(2) **Number of decoder layers**: We also fix the hidden size at 128 and vary the number of decoder layers, training each model on 20% of the data. Based on the experimental results below, we conclude that increasing the depth of the model benefits performance.
| #Decoder Layer | #Param | minADE $\downarrow$ | Realism $\uparrow$ | Linear Speed $\uparrow$ | Collision $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| 1 | 2M | 1.7318 | 0.7319 | 0.3465 | 0.9319 | 0.9149 |
| 2 | 3M | 1.6247 | 0.7349 | 0.3546 | 0.9409 | 0.9163 |
| 3 | 4M | 1.5381 | 0.7387 | 0.3570 | 0.9450 | 0.9199 |
| 4 | 5M | 1.4881 | 0.7396 | 0.3633 | 0.9481 | 0.9207 |
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for addressing my comments. My main concerns are resolved, and given that the rebuttal material will be added to the paper, I would like to increase my rating to a 7.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you again for the constructive suggestions! | Summary: This paper presents BehaviorGPT, a model for multi-agent traffic simulation. BehaviorGPT's architecture is based on a decoder-only transformer that autoregressively predicts patches of trajectories. The key insight is that predicting patches forces the model to learn longer-horizon reasoning. Then, to predict states from patches, BehaviorGPT uses an RNN that outputs a mixture of Gaussians/Laplacians for each agent's x, y, and heading. BehaviorGPT achieves state-of-the-art performance on the Waymo Open Sim Agents Challenge.
Strengths: - The proposed architecture is simple yet incorporates well-reasoned inductive biases like relative positional encodings and factorized self-attention. Despite its simplicity, BehaviorGPT achieves state-of-the-art performance on the Waymo Open Sim Agents Challenge. Moreover, BehaviorGPT does this with 10x fewer parameters than the previous state-of-the-art, Trajeglish.
- The paper ablates the efficacy of patch-based tokens, convincingly demonstrating that they are critical to strong performance. To my knowledge, this design choice is novel within multi-agent traffic simulation and its adjacent fields.
- Code will be made available after publication.
- The paper is generally well-written and easy to follow.
Weaknesses: - The paper could be strengthened with stronger ablation studies to highlight the design choices that make BehaviorGPT outperform prior work, like Trajeglish. A number of design choices in BehaviorGPT set it apart from Trajeglish, but it is not clear which ones actually contribute to its ability to outperform Trajeglish even with significantly fewer parameters; e.g., is it the patch-based tokens? Is it the factorized attention with relative positional encodings? Is it the specific mixture of Gaussians/Laplacians output distribution? Such analysis would give the reader more insight into which design choices are important for multi-agent traffic simulation.
- The paper does not provide sufficient detail to reproduce the model; e.g., basic hyperparameters like the number of transformer layers. That said, this weakness is mitigated by the authors' intention to publish code upon acceptance.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. During inference, how do you generate 32 rollouts? Do you randomly sample with a different seed 32 times?
2. What is the significance of not using ground truth state when unrolling the RNN? Do you have experiments to illustrate why it's necessary?
3. Can the authors discuss limitations of their method in a more meaningful manner than "performance can be better"?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are mentioned (in the checklist section only) but limited to "performance could be better". The paper can be improved with a more thorough discussion of limitations; e.g., what are the reasons why BehaviorGPT still underperforms in certain metrics on Table 1?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful comments and constructive suggestions. In the following, we attempt to address your critical concerns by giving more analyses and clarifying the implementation details.
**Q1: What makes BehaviorGPT outperform SOTA with fewer parameters?**
**A1:** Indeed, the most notable design choices that set BehaviorGPT apart from other autoregressive behavior models lie in three aspects: (1) the patch-based tokens, (2) the decoder-only Transformers based on relative spacetime representation, and (3) the use of continuous distributions. Let us examine these design choices with more in-depth analyses.
(1) **Patching mechanism**: Our paper argues that the Next-Patch Prediction scheme can enhance models’ capability of long-range reasoning. In Table 3, we show that using a larger patch (e.g., increasing the patch size from 5 to 10) can improve the performance. However, some may suspect that the performance gain is merely due to the lower replan frequency (e.g., 1 Hz for a patch size of 10 compared with 2 Hz for a patch size of 5), as discovered by Waymo [30], which may enhance the temporal stability of trajectories. To clarify this, we increase the replan frequency by discarding a portion of the predicted states at each simulation step. The test set results produced by the model with a patch size of 10 are shown below. From the table, we can see that increasing the replan frequency from 1 Hz to 2 Hz can even improve the overall performance, which may benefit from the enhanced reactivity. This phenomenon demonstrates that the performance gain is not merely due to the lower replan frequency, as the model with a patch size of 10 outperforms the one with a patch size of 5 by an even larger margin when using the same replan frequency. However, we found that an overly high replan frequency harms the performance, as indicated by the third row of the table. Overall, we conclude that using a larger patch indeed helps long-term reasoning, but a moderate replan frequency is important for temporal stability, which may have been neglected by prior works.
| Patch Size | Replan Frequency | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: | :---: |
| 10 | 1 Hz | 1.5405 | 0.7414 | 0.9308 |
| 10 | 2 Hz | **1.4147** | **0.7473** | **0.9349** |
| 10 | 5 Hz | 1.5693 | 0.7342 | 0.9089 |
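The replanning scheme described above can be sketched as follows (our own toy sketch, not the authors' code; the 10 Hz state rate, the `rollout`/`predict_patch` names, and the dummy predictor are illustrative assumptions). Replanning at a higher frequency simply means executing only the head of each predicted patch and discarding its tail before predicting again:

```python
# Toy sketch of receding-horizon replanning by truncating predicted patches.
# Assumed: states at 10 Hz, patch size 10, so executing the full patch gives
# 1 Hz replanning, and executing only the first 5 states gives 2 Hz.

def rollout(predict_patch, init_state, horizon_steps, patch_size, replan_hz, state_hz=10):
    steps_per_plan = state_hz // replan_hz       # states executed before replanning
    assert steps_per_plan <= patch_size
    traj, state = [], init_state
    while len(traj) < horizon_steps:
        patch = predict_patch(state, patch_size)  # model proposes a full patch
        executed = patch[:steps_per_plan]         # keep the head, discard the tail
        traj.extend(executed)
        state = executed[-1]                      # closed loop: continue from last executed state
    return traj[:horizon_steps]

# Dummy stand-in for the model: each next state increments the previous one.
dummy = lambda s, n: [s + i + 1 for i in range(n)]
print(len(rollout(dummy, 0, 80, patch_size=10, replan_hz=2)))  # 80
```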
(2) **Decoder-only Transformers with relative spacetime representation**: Decoder-only Transformers with relative spacetime representation can utilize training data more efficiently. Compared with encoder-decoder models, where each scenario is treated as one pair of history and future, BehaviorGPT treats each time step as the current one and requires each state to model the distribution of subsequent states during training, which is equivalent to constructing training samples from every possible history-future pair of the time series. To illustrate the sample efficiency of BehaviorGPT, we evaluate models (5M parameters, hidden size = 128, #Decoder layers = 4) trained with different proportions of the training data. The table below shows that using merely 20% of the data for training already enables the model to surpass many SOTA methods. This high data efficiency allows small models to achieve remarkable performance.
| Training Data | minADE $\downarrow$ | Realism $\uparrow$ | Offroad $\uparrow$ |
| :--- | :---: | :---: | :---: |
| 20% | 1.4881 | 0.7396 | 0.9207 |
| 50% | 1.4060 | 0.7427 | 0.9250 |
| 100% | 1.3804 | 0.7438 | 0.9268 |
(3) **Choice of distributions**: Unlike standard autoregressive models, we choose continuous distributions for modeling since we prefer more end-to-end solutions. To model traffic scenarios with categorical distributions, we must first conduct discretization, which involves many essential implementation details (e.g., choosing a proper vocabulary size). Given that the code of Trajeglish is not publicly available, it would be inappropriate to hastily conclude something like "using continuous distributions is better" by comparing our model with a discretized variant that is not well-tuned. However, we indeed found that the choice of distributions is crucial. Prior to using Laplace distributions, we used Gaussian distributions for parameterization and noted that the model could not converge normally (see Figure c in the general response). Moreover, we observed that directly optimizing the full mixture models is better than using the winner-take-all training strategy (Realism score: 0.7396 vs 0.7146). These are the most critical choices for making a continuous autoregressive model work well.
***
**Q2: What about the implementation details?**
**A2**: All hidden sizes are 128. All attention layers have 8 attention heads with 16 dimensions per head. We train our models for 30 epochs with a batch size of 24 using the AdamW optimizer. The weight decay rate and the dropout rate are both 0.1. Using the cosine annealing scheduler, we decay the learning rate from $5 \times 10^{-4}$ to 0. Our main results are produced by a single model with 2 decoder layers and 3M parameters. To obtain 32 replicas of rollouts, we randomly sample a behavior mode from each agent's next-patch distribution using different random seeds.
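A minimal sketch of the stated learning-rate schedule (cosine annealing from $5 \times 10^{-4}$ to 0 over the 30 training epochs; the per-epoch step granularity is our assumption, not something the rebuttal specifies):

```python
# Sketch of the stated schedule: cosine annealing from 5e-4 down to 0.
import math

def cosine_lr(epoch: int, total_epochs: int = 30, lr_max: float = 5e-4) -> float:
    """Cosine annealing without restarts, minimum learning rate 0."""
    return 0.5 * lr_max * (1 + math.cos(math.pi * epoch / total_epochs))

print(cosine_lr(0))   # 5e-4 at the start
print(cosine_lr(15))  # half the peak midway: 2.5e-4
print(cosine_lr(30))  # decays to 0 at the end
```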
***
**Q3: What is the significance of not using the GT when unrolling the RNN?**
**A3**: We have tried a non-autoregressive GRU that does not rely on the predicted states when unrolling the next states, which can achieve lower minADE (1.5203 vs 1.6554) but performs worse on higher-order kinematic metrics (speed likelihood: 0.3517 vs 0.3544, acceleration likelihood: 0.2630 vs 0.2873).
***
**Q4: What are the limitations?**
**A4**: (1) The kinematic performance can be enhanced by incorporating a kinematic model (e.g., bicycle model); (2) BehaviorGPT does not support controlling agent behavior with prompts (e.g., language, goal points); (3) We have not verified whether BehaviorGPT will facilitate the development of motion planning.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The authors have addressed most of my questions about the paper, particularly as it relates to my question regarding why BehaviorGPT outperforms the state-of-the-art. My only remaining suggestion is that the authors discuss the limitations of this paper in more depth in the camera ready; the current limitations remain surface-level discussions that do not contribute to the reader's understanding of BehaviorGPT's efficacy.
Considering this, I would like to maintain my rating and recommend this paper's acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for the feedback. We will add more discussions about this work's limitations as you suggested. | null | null | null | null |
Pre-training Differentially Private Models with Limited Public Data | Accept (poster) | Summary: This work theoretically justifies the loss of utility under DP pre-training under the lens of a Hessian. This theoretical result can then be leveraged to perform efficient pre-training of models on public data (small amount) to significantly improve DP-trained model's performance.
Strengths: The problem the authors attempt to solve in this paper is very important and the work is timely (given the concerns regarding the usage of copyrighted material during pre-training of large-scale models). To the best of my knowledge, the results are theoretically justified and sound. There are many experiments supporting the authors' claims in this work, outperforming the existing approaches.
Weaknesses: My fundamental issue with the positioning of this work is that we already know that public data improves DP - [1,2,3,4,6]. We also know that applying DP during pre-training harms utility more than applying it to a pre-trained model [2,3,4]. Therefore my question is: what is the main scientific output of this paper? The conclusions are already widely established, and I do not see any new outcomes that can be derived from the conclusions of this work. It would have been interesting to see if this result is linked to the model's generalization/memorization capacities as described in [5], showing some novel contextualized understanding of the training dynamics, but there was no discussion on the interplay of training time and memorization capacity under DP.
The theoretical justification of why this phenomenon arises is indeed interesting and can be useful, but on its own does not provide any novel insights into the training process that are 'actionable'. This is, again, because it was already previously established that a) DP pre-training harms utility [1,6,7], b) public data aids utility of DP models [2] and c) when models are untrained, they are more susceptible to various phenomena that can affect the training dynamics (e.g. loss of information via clipping [5]). So I am struggling to understand what can one gain from the results of this work? The 10% data threshold is a purely empirical result, which may or may not scale to other settings and datasets.
[1] - Bu, Zhiqi, Jialin Mao, and Shiyun Xu. "Scalable and efficient training of large convolutional neural networks with differential privacy." Advances in Neural Information Processing Systems 35 (2022): 38305-38318.
[2] - Kurakin, Alexey, et al. "Toward training at imagenet scale with differential privacy." arXiv preprint arXiv:2201.12328 (2022).
[3] - Nasr, Milad, et al. "Effectively using public data in privacy preserving machine learning." International Conference on Machine Learning. PMLR, 2023.
[4] - Ganesh, Arun, et al. "Why is public pretraining necessary for private model training?." International Conference on Machine Learning. PMLR, 2023.
[5] - Feldman, Vitaly. "Does learning require memorization? a short tale about a long tail." Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. 2020.
[6] - Mehta, Harsh, et al. "Large scale transfer learning for differentially private image classification." arXiv preprint arXiv:2205.02973 (2022).
[7] - Mireshghallah, Fatemehsadat, et al. "Differentially private model compression." Advances in Neural Information Processing Systems 35 (2022): 29468-29483.
Technical Quality: 3
Clarity: 2
Questions for Authors: Can the framework be used to explain the reason behind DP being 'more difficult' for untrained models through the prism of memorization?
Is there a more theoretically sound way to define the threshold of public data that is required for appropriate performance?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: As outlined above: while the theoretical justification of the results is indeed interesting, there are no novel conclusions or takeaways from this work that we have not previously had from similar works in the area of DP training.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. We will address them point by point and hope the reviewer can raise the score if satisfied.
> My fundamental issue with the positioning of this work is that we already know that public data improves DP - [1,2,3,4,6]. We also know that DP training during pre-training is harming utility more than it would have harmed the pre-trained model [2,3,4]. Therefore my question is what is the main scientific output of this paper? The conclusions are already widely established and I do not see any new outcomes that can be derived from the conclusions of this work.
We would like to highlight that the main scientific output is to answer "why does DP training suffer from slow convergence?", whereas existing works mostly only observe that "DP convergence is slower than non-DP" without a fine-grained explanation. That is, knowing that something happens is not the same as knowing why it happens, and scientific insight beyond the empirical can only be rooted in the why. Our explanation via the trace of the Hessian works for both pre-training and fine-tuning (our contribution 1; see Figure 1). We specifically separate the clipping and noising and emphasize that clipping is not troublesome but noising is (our contribution 2; again see Figure 1)! We then use our analysis to provide an actionable training strategy that only requires a small amount of pre-training data (which is very different from most works, where most data are non-DP and only a fraction of the fine-tuning data requires DP) and automatically switches to DP training (our contribution 3).
> The theoretical justification of why this phenomenon arises is indeed interesting and can be useful, but on its own does not provide any novel insights into the training process that are 'actionable'. This is, again, because it was already previously established that a) DP pre-training harms utility [1,6,7], b) public data aids utility of DP models [2] and c) when models are untrained, they are more susceptible to various phenomena that can affect the training dynamics (e.g. loss of information via clipping [5]). So I am struggling to understand what can one gain from the results of this work?
We are glad the reviewer finds our theoretical justification interesting. We would like to emphasize some actionable insights in our work: (1) The analysis in Section 3 shows that noising is the main cause of slow convergence. Therefore, we recommend a future direction to improve DP training by reducing the noise vector, instead of improving on the per-sample clipping. Some efforts may include parameter-efficient training (like LoRA), pruning and noise reduction (e.g. via tighter privacy accounting in Implication 2.3). (2) The Section 4 and all experiments rely on our continual pre-training strategy, where we monitor the loss to switch from non-DP to DP training (which implicitly uses the tr(H) information). (3) Given that tr(H) is worsening DP convergence, we may adopt sharpness-aware minimization and regularization to encourage the model to move in a flat region, so that DP convergence can accelerate.
> Can the framework be used to explain the reason behind DP being 'more difficult' for untrained models through the prism of memorization?
We focus on the training dynamics, to which memorization may not be directly linked.
> Is there a more theoretically sound way to define the threshold of public data that is required for appropriate performance?
We show that DP convergence is always going to be slower than non-DP, although the slowdown can become insignificant after the initial training phase, so the threshold will be subject to "what level of slowdown is acceptable?". Generally speaking, if the DP batch size is chosen properly (see Line 184), we should expect at most a 2x slowdown, and more non-DP pre-training brings this factor of 2 down to close to 1.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I would like to thank the authors for their responses! I still have some points I would like to note.
From the discussion with reviewer JCrd I am not sure I agree with your response to them:
> the extent of the bias is less significant and hence ignorable, compared to the effect of DP noising
This may very well be the case for the specific settings of your experiments, but I am not convinced this is a general 'rule' you can follow and claim that this bias is 'ignorable'.
> and noise reduction (e.g. via tighter privacy accounting in Implication 2.3)
When I said 'actionable results', telling the community to use better DP tools (i.e. tighter accountant) is not really something actionable from YOUR results (i.e. we would always use the tightest accounting all else aside anyway).
After reading the rest of the response, I am keeping my score unchanged.
---
Rebuttal 2:
Comment: Thank you for joining the discussion! We understand your concern that the clipping bias may be non-ignorable in some settings that we haven't covered. From our experience with DP model training across projects consisting of hundreds of experiments, we always observe that "clipping without noising" empirically gives performance similar to non-DP. Nevertheless, we cannot rigorously claim this regardless of how many settings/experiments we test. Hence we would state "this analysis only approximates the scenario where the clipping introduces ignorable bias, e.g. when the clipping threshold is large or when unbiased clipping is used (c.f. 'Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach')".
Additionally, we highlight that our connection between tr(H) and noising (to which we attribute the slowdown of DP) still holds even if we take the clipping bias into consideration: Equation (5) will become
$$\Delta L_\text{priv}=\eta\hat{G}^\top G-\frac{\eta^2}{2}\left(\hat{G}^\top H \hat{G}+\frac{tr(H\hat{\Sigma})}{B}+\frac{\sigma^2 tr(H)}{B^2}\right)$$
where the hat stands for **biased gradient** from per-sample clipping.
Regarding the actionable items specifically from our results, we would recommend noise reduction like low-pass filtering (which we are working on) and sharpness-aware minimization (to reduce tr(H), known as the sharpness). It would also be desirable to experiment with our DP continual training in more settings such as NLP. Happy to extend the discussion!
Given that this work is on the borderline, we would appreciate it if the reviewer could consider raising the score.
Strengths: - Conclusions are well supported by figures and illustrations.
- The framework of analyzing improvements appears novel to me.
- The experimental evaluations are extensive, and the improvements are significant.
Weaknesses: - The authors claim that *a certain amount of public data can mitigate deceleration*. However, Section 4 lacks discussion about deceleration. The given claim is that per-iteration improvement is increased, which is obvious given public data. Also, it's unclear why *limited* public data is emphasized, as the analysis does not involve public sample sizes. This makes the article somewhat incoherent, as the interesting theoretical analysis in Section 3 does not provide strong suggestions for the methodology.
- The experiments lack some ablation studies to support the key methodological proposal.
- Citations should be updated to published versions rather than arXiv preprints where possible.
- In line 14, there's an extra "and".
Technical Quality: 3
Clarity: 3
Questions for Authors: Overall, this paper is interesting. I am happy to raise my score if some of the questions are addressed.
- Can the authors provide some comments on the first weakness?
- Section 3.3 is somewhat confusing. First, Remark 3.1 investigates the differences between pre-training and fine-tuning by comparing $B_{non-DP} G^{\top} H G$ and $tr(H\Sigma)$, while in fact, public per-iteration improvement is monotonic with respect to $B$. Why use a comparison between $B_{DP}$ and $B_{non-DP}$ for deriving the explanation? Also, can the authors more rigorously explain what is meant by "data-efficient"?
- Moreover, the difference between public improvement (7) and private improvement (6) lies in the decelerator $\sigma^2 tr(H) / (Bc^2)$, which seems smaller for pre-training where the loss landscape is flatter and thus the curvature $tr(H)$ is smaller. What is wrong with this intuition?
- It is observed that prediction accuracy is linear in $\log B$, for instance in Figure 4 in [1], while the statement here is that $B$ should be moderately chosen. Though these are not the same quantity, could the authors comment on the impact of $B$ on DP-SGD?
- Based on analysis in Section 3, is there any off-the-shelf rule for choosing $B$?
- Can the authors provide some ablation studies (can be on toy datasets) on the impact of $s$?
[1] Tom Sander, Pierre Stock, and Alexandre Sablayrolles. Tan without a burn: Scaling laws of dp-sgd. In International Conference on Machine Learning, pages 29937–29949. PMLR, 2023.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations should appear in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and the comments. We will address them point by point and hope the reviewer can raise the score if satisfied.
> The authors claim that a certain amount of public data can mitigate deceleration. However, Section 4 lacks discussion about deceleration. The given claim is that per-iteration improvement is increased, which is obvious given public data. Also, it's unclear why limited public data is emphasized, as the analysis does not involve public sample sizes. This makes the article somewhat incoherent, as the interesting theoretical analysis in Section 3 does not provide strong suggestions for the methodology.
Indeed, we agree with the reviewer that we do not provide a theoretical characterization of "limited public data"; the "limited" is only emphasized from a practical standpoint. In Section 4, we have leveraged our knowledge about the deceleration in Equation (9) to suggest the mixing ratio of public and private gradients, and to motivate the DP continual pre-training that empirically mitigates the deceleration by achieving higher utility. We emphasize "limited public data" because our mitigation of the deceleration is so effective that we use far less public data than other DP training strategies (see the right-most column in Table 2).
> The experiments lack some ablation studies to support the key methodological proposal.
We have ablation studies over privacy budgets (multiple epsilon), few-shot or full settings, task difficulties (from CIFAR10 to Places365). We are happy to discuss more ablation studies if the reviewer can be more specific.
> Citations should be updated to published versions rather than arXiv preprints where possible.
We completely agree and will update the citations in the camera-ready version (NeurIPS does not allow revisions this year).
> Section 3.3 is somewhat confusing. First, Remark 3.1 investigates the differences between pre-training and fine-tuning ..., while in fact, public per-iteration improvement is monotonic with respect to B. Why use a comparison ... for deriving the explanation? Also, can the authors more rigorously explain what is meant by "data-efficient"?
Here "data-efficient" means the per-sample improvement is high: the same 0.1 improvement obtained from 10 data samples is much more data-efficient than the same improvement obtained from 1000 samples. While the public per-iteration improvement is monotonic, the rate of improvement can differ. For example, we may have the (batch size, loss improvement) pairs: (B=1, 0.1), (B=2, 0.08), (B=10, 0.07), (B=1000, 0.0699). Hence, in Remark 3.1, we stated that the non-DP batch size should be small to be data-efficient.
> Moreover, the difference between public improvement (7) and private improvement (6) lies in the decelerator, which seems smaller for pre-training where the loss landscape is flatter and thus the curvature is smaller. What is wrong with this intuition?
We are glad the reviewer points this out! We agree that tr(H) is small initially, as we also demonstrate in Figure 6. But it quickly increases (within 5 epochs) to a large value and stays there for a long time (say, epochs 5 to 40) before decreasing again. Therefore, even though DP training is fast initially, it is only fast for a short period, and overall DP is much slower than non-DP. This is also observed in Figure 1 (a)(c). We will add this discussion to the camera-ready paper.
> It is observed that prediction accuracy is linear in log(B), for instance in Figure 4 in [1], while the statement here is that B should be moderately chosen. Though these are not the same quantity, could the authors comment on the impact of B on DP-SGD?
We believe B should be moderately chosen. However, the optimal B may be very large, and hence, for a range of batch sizes smaller than the optimal B, it is not empirically wrong that increasing B could improve convergence. Nevertheless, we notice that [1] does not explicitly consider adjusting the learning rate for different batch sizes. Therefore, the conclusion there does not hold in our setting, as we also use the optimal learning rate (see Equation (5)) in this work. Notice that in Equation (5), if the learning rate η is independent of B, then the improvement is monotonic in B. However, in theory and in practice, the learning rate depends on B, and thus Equation (5) is NOT monotonic in B, as we demonstrate in Figure 5.
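As a toy numeric sketch of this trade-off (our own construction with made-up scalar stand-ins for $G^\top H G$, $tr(H\Sigma)$, and $\sigma^2 tr(H)$; this is not the paper's Figure 5): with the learning rate optimized for each B, the per-iteration improvement is roughly inversely proportional to the decelerated denominator in Equation (5), but under a fixed number of processed samples the iteration count scales as 1/B, so the per-sample improvement peaks at a moderate batch size rather than at either extreme:

```python
# Toy sketch with assumed scalar values standing in for the quantities in
# Equation (5): GHG ~ G^T H G, trHS ~ tr(H Sigma), s2trH ~ sigma^2 tr(H).
GHG, trHS, s2trH = 1.0, 1.0, 100.0

def per_sample_improvement(B: float) -> float:
    # With the optimal learning rate, per-iteration improvement is
    # proportional to 1 / (GHG + trHS/B + s2trH/B^2); under a fixed
    # sample budget, the number of iterations scales as 1/B.
    per_iter = 1.0 / (GHG + trHS / B + s2trH / B**2)
    return per_iter / B

best_B = max(range(1, 101), key=per_sample_improvement)
print(best_B)  # peaks at B = sqrt(s2trH / GHG) = 10, not at 1 or 100
```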
> Based on analysis in Section 3, is there any off-the-shelf rule for choosing B?
Unfortunately, choosing B is not easy, even for non-DP training.
> Can the authors provide some ablation studies (can be on toy datasets) on the impact of s?
In Figure 7, we have three values of s: s=0 is fully private training (red curve); s=1 is fully non-private training (black curve); automatic s (switch point and blue curves) is around 25%. We also use automatic s (around 10%) in all experiments in Section 5. In short, if we plot utility against s, we observe a sharp trade-off in which a small (but not too small) s suffices to capture most of the utility.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed clarification, and I have updated my score. | Summary: This paper proposes a Hessian-based analysis of the per-iteration loss for DP-SGD and non-private training, and provides a theoretical explanation for the slower convergence of differentially private training. The authors identify the *decelerator* component in the per-iteration loss improvement, associated with gradient clipping and noising, providing theoretical backing for the differences between DP's impact on pre-training vs. fine-tuning. Using the same framework, the authors also derive the optimal batch size for private training under a fixed computation budget (i.e., a fixed number of samples processed), which strikes a balance between lower per-sample noise and the training slowdown associated with a growing batch size.
Based on this analysis above, authors suggest the impact of DP on convergence can be mitigated by public pre-training data. While using public pre-training data to complement DP fine-tuning has been explored before, authors emphasize the difference between fine-tuning and continual pre-training, provide a theoretical analysis for the optimal mixing proportion for private/public data, and evaluate their models after private continual pre-training on a downstream fine-tuning tasks.
Strengths: I believe this is a strong and well-written paper with valuable contributions to the field.
* The paper is very clearly written, with all major claims supported by arguments, derivations, and graphs. It's easy to follow and it gets the point across.
* The derivation of the decelerator component of the per-iteration loss is a valuable contribution. It provides a good explanation for the DP-SGD training dynamics and theoretical backing for previously observed empirical results. Specifically, Fig. 6 provides very interesting insights into DP-SGD training which I haven't seen analyzed before.
* The downstream findings (e.g. optimal batch size or optimal ratio of private/non-private data) could be impactful for the practical applications of DP-SGD.
Weaknesses: The major weakness of the paper (at least its theoretical part) is the assumption that per-sample gradient clipping does not introduce bias into the gradient approximation (line 116). This is a very strong assumption, and something that, to the best of my knowledge, is not generally accepted in the field (e.g. see [this paper](https://proceedings.neurips.cc/paper_files/paper/2020/file/9ecff5455677b38d19f49ce658ef0608-Paper.pdf)). To justify the assumption authors point to Fig. 1, which to me doesn't look like it supports their claim - there's a significant difference between vanilla optimizer and optimizer + clipping in pre-training.
Additionally, the analysis in this paper assumes oracle knowledge of the matrices G (gradient expectation) and H (Hessian). For practical applications where this is infeasible, it would be useful to look at, e.g., the optimal batch size not only from an optimal-loss perspective, but also taking into account how well the batch gradient approximates the actual gradient G.
On the evaluation side, I see a slight disconnect with the theoretical results. For instance, after the results in Sec. 4.1 it would be natural to explore different mixing ratios of public/private data in training - however the authors only focus on a setup with fixed pre-training and continual-training datasets.
The results presented in Tables 3, 4, and 5 do make a good case that the proposed approach is valid, but lack proper baselines. Comparisons are made either with non-private models, models trained on a different dataset, or models trained for a different number of epochs - contradicting the scenario with a fixed compute budget.
Authors do not report compute resources used for the experiments, which is especially relevant for reproducibility of the paper's results, as it works with Hessian matrices which can be very computationally expensive to compute.
Technical Quality: 3
Clarity: 3
Questions for Authors: * I think the formatting for Fig. 5 is wrong - in the text you refer to the "upper left" plot, while in the submitted version it is rendered as a single row of 4 plots
* I would be interested in reading how you computed the data for Fig. 6 - did you explicitly compute the full matrices `G` and `H` or use some approximations?
* I don't fully understand why in Fig.5 (pre-training) the blue dashed line is linear, suggesting that $G^THG$ is constant.
* In Table 2, a) does number of images include continual pre-training? and b) what does "non-privacy" column refer to?
* What is the criteria for distinguishing continual pre-training from fine-tuning? For example, in your experiments you perform continual pre-training with a different objective than earlier pre-training (supervised vs non-supervised). Does it not justify the "fine-tuning" term?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors do not discuss limitations explicitly, but are very upfront about assumptions they make (e.g. fixed compute budget, assuming no bias from clipping, etc). Authors also briefly mention limitations in the checklist (but not the main body of the paper).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and comments. We will address them point by point. The answers to some questions are merged into the responses to the weaknesses.
> The major weakness of the paper (at least its theoretical part) is the assumption that per-sample gradient clipping does not introduce bias into the gradient approximation (line 116). This is a very strong assumption, and something that, to the best of my knowledge, is not generally accepted in the field (e.g. see this paper). To justify the assumption authors point to Fig. 1, which to me doesn't look like it supports their claim - there's a significant difference between vanilla optimizer and optimizer + clipping in pre-training.
We agree that gradient clipping definitely introduces bias in the gradient approximation, but the extent of the bias is less significant, and hence ignorable, compared to the effect of DP noising (see the closeness of the blue and black curves, and the remarkable distance between the yellow and black curves, especially for GPT2). We use this assumption to prioritize the analysis of noising (which leads to the trace of the Hessian and the decelerator), which is necessary to simplify our analysis.
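To make the two components under discussion concrete, here is a minimal sketch of a DP-SGD-style update that separates per-sample clipping from Gaussian noising (an illustration only, not the paper's implementation; all parameter values are placeholders):

```python
import numpy as np

def dp_sgd_update(per_sample_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD-style step: clip each per-sample gradient, sum,
    add Gaussian noise proportional to the clipping norm, and average."""
    clipped = []
    for g in per_sample_grads:
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # clipping component
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)  # noising component
    return -lr * (total + noise) / len(per_sample_grads)

# noise_mult = 0 isolates the clipping bias (blue vs. black curves in Figure 1);
# noise_mult > 0 adds the DP noise whose effect the analysis prioritizes.
rng = np.random.default_rng(0)
grads = [np.array([0.3, 0.4]), np.array([-0.3, 0.4])]
update = dp_sgd_update(grads, clip_norm=1.0, noise_mult=0.0, lr=0.1, rng=rng)
```

With noise_mult set to 0 and gradients already inside the clipping ball, the step reduces to plain averaged SGD, which is the comparison the curves in Figure 1 illustrate.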
> Additionally, the analysis in this paper assumes the oracle knowledge of matrices G (gradient expectaition) and H (Hessian). For practical applications where this is infeasible, it would be useful to look at, e.g. optimal batch size not only from an optimal loss perspective, but also take into account how well the batch gradient approximates the actual gradient G.
We agree that oracle G and H are not available (as we noted in footnote 2) and we will extend the discussion on the selection of batch size.
> On the evaluation side, I see a slight disconnect with the theoretical results. For instance, after the results in Sec. 4.1 it would be natural to explore different mixing ratios of public/private data in training - however authors only focus on a setup with fixed pre-training and continual training datasets.
Thanks for this valuable suggestion. To clarify, the ratio is about the re-weighting of gradients. The reason we didn't experiment with other mixing ratios is given in Remark 4.2: these methods are hard to implement and not scalable, and most do not have open-source code.
> The results presented in Tables 3,4 and 5 do make a good case that the proposed approach is valid, but lack proper baselines. Comparisons are made either with non-private models, models trained on different dataset, or model trained for different number of epochs - contradicting the scenario with fixed compute budget.
We agree the comparisons are not perfectly comprehensive, and it is computationally expensive to explore them; e.g., VIP is trained on Shaders21k (which would at least double our computation budget) and NFnet is trained on JFT (which is not publicly available). We hope the reviewer agrees that this is acceptable, as we follow the same experimental setup as VIP, and the utility of our method is clear even though we use much less compute.
> Authors do not report compute resources used for the experiments, which is especially relevant for reproducibility of the paper's results, as it works with Hessian matrices which can be very computationally expensive to compute.
We are using 1 A100 GPU and we will release the trained models for reproducibility. We actually never compute the full Hessian matrix, because we only need $\mathrm{tr}(H)$, which can be computed via the Hutchinson method (briefly mentioned in Line 820): we sample 50 random vectors $z$, compute the scalar $z^\top H z$ for each, and then average over the 50 samples. Here $z^\top H z$ is computed from finite differences of losses. Please let us know if more details are desired.
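As context, the Hutchinson method mentioned above can be sketched as follows (an illustrative sketch: the Hessian-vector product here uses an explicit toy matrix, whereas the rebuttal obtains $z^\top H z$ from finite differences of losses):

```python
import numpy as np

def hutchinson_trace(hvp, dim, num_samples=50, seed=None):
    """Estimate tr(H) from Hessian-vector products only (Hutchinson's estimator).

    For z with i.i.d. Rademacher entries, E[z^T H z] = tr(H), so averaging
    z^T H z over random probes approximates the trace without forming H."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        total += z @ hvp(z)                    # scalar z^T H z
    return total / num_samples

# Toy check against an explicit symmetric matrix standing in for the Hessian:
H = np.array([[2.0, 1.0], [1.0, 3.0]])  # tr(H) = 5
estimate = hutchinson_trace(lambda z: H @ z, dim=2, num_samples=4000, seed=0)
```

The diagonal contributions are exact for every Rademacher probe (since $z_i^2 = 1$); only the off-diagonal cross terms need to be averaged out, which is why a modest number of probes suffices.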
> I don't fully understand why in Fig.5 (pre-training) the blue dashed line is linear, suggesting that $G^THG$ is constant.
In Figure 5 the illustration is for one iteration, so $G^\top H G$ is a constant and B is the variable. $G^\top H G$ does change over iterations, as we show in Figure 6.
> In Table 2, a) does number of images include continual pre-training? and b) what does "non-privacy" column refer to?
a) Yes. We stated in the caption that it is "the total number of images". b) DP is defined with respect to a specific dataset. If we train on data A non-privately and then on data B privately, then only B has a privacy guarantee, and A is indicated in the "non-privacy" column. We will clarify this in the camera-ready.
> What is the criteria for distinguishing continual pre-training from fine-tuning? For example, in your experiments you perform continual pre-training with a different objective than earlier pre-training (supervised vs non-supervised). Does it not justify the "fine-tuning" term?
In this work, we consider fine-tuning to be a) on a specific (and usually much smaller) dataset, so that techniques like parameter-efficient fine-tuning (e.g. LoRA) are applicable because the change in parameters is small, whereas continual pre-training is on a large amount of data for a series of downstream tasks and LoRA won't work; and b) on a second dataset with a distribution shift, e.g. many DP papers publicly pretrain on ImageNet and finetune on CIFAR10, whereas continual pre-training mostly uses a similar dataset, e.g. we pretrain on 10\% of ImageNet and continue the training on the other 90\%.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal.
> We agree that gradient clipping definitely introduces bias to the gradient approximation, but the extent of the bias is less significant and hence ignorable, compared to the effect of DP noising <...> We are using this assumption to prioritize the analysis of noising (which leads to trace of Hessian and decelerator), which is necessary to simplify our analysis.
I understand the author's point to prioritize the noise over clipping in their analysis. I agree it is justifiable and provide a foundation for the important theoretical analysis presented in the paper. I would not, however, call the clipping bias "ignorable" - I would argue that it remains an important direction for future research to understand the DP training dynamics.
I also appreciate clarifications provided, they were useful for me to understand the paper better.
Overall, I believe it's a strong paper and choose to maintain my high score.
---
Reply to Comment 1.1.1:
Comment: We agree "ignorable" is an overstatement and that clipping bias is indeed important to DP training. We will surely add the clarification in the next revision. | Summary: This paper provides a theoretical framework to analyze the impact of various aspects (parameters) of DP training on the performance of the resulting models. The framework uses the Hessian of the per-sample gradients to compute the per-iteration loss improvement. Using the framework, the paper shows how DP impacts the performance of a model more in pre-training than in fine-tuning, and suggests using public-data pre-training as a remedy. Finally, based on these observations, the paper proposes a DP continual learning approach and shows how it can help improve the performance of upstream and downstream tasks.
Strengths: - Approach of the proposed framework can be useful to analyze many DP settings.
- Interesting conclusions especially Implication 2.4 about batch size
- Proposed continual learning approach seems practically useful in privacy sensitive settings
- Few shot accuracy of the proposed approach is impressive
Weaknesses: - Some assumptions, especially about clip norm, need more clarification
- Some of the theoretical claims could be paired with empirical evidence
- Proposed continual pre-training approach needs better explanation
Technical Quality: 2
Clarity: 2
Questions for Authors: - It looks like the conclusions made in the paper rely heavily on the assumption that the clip norm multiplier is the same for all per-sample grads. The assumption is justified by Figure 1, where the performances of models with and without clipping are similar. But why would this always be true? If clipping is very aggressive, it will introduce bias and performance will reduce much more than what we see in Figure 1. Maybe it is good to clarify the scope of the assumption.
- In the “Per-iteration improvement of DP-SGD” it says that parameter updates are generally small. When is this true in practice?
- In Table 3, why do performances of many fine-tuning cases reduce from DINO to Ours(eps=2)? I think the performance should always improve? At the same time, few-shot performance of proposed method is significantly higher than DINO in all cases. Can you clarify this difference?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: - Assumptions made in the work to set up the theoretical framework need better clarification in terms of the scope of theory.
- It would be good to formally write an algorithm of DP continual pre-training, as I found section 4.2 a bit confusing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and comments. Given that the weaknesses are mostly about clarification and explanation, we are happy to address them if the reviewer can be slightly more specific; we address the questions below.
> It looks like the conclusions made in the paper rely heavily on the assumption about clip norm multiplier being the same for all per-sample grads. The assumption is because in Figure 1 performances of models with and without clipping are similar. But, why would this be always true? If clipping is very aggressive, it will introduce bias and performance will reduce much more than what we see in Figure 1. Maybe it is good to clarify the scope of the assumption.
We clarify that the clipping norm multiplier is NOT the same for all per-sample grads, and this would NOT always be true. We explicitly state in Lines 115-116 that "This approximation **only holds when** the directions of vectors ... are very close, i.e., there is **little** per-sample clipping bias." Note that little bias must be distinguished from no bias. In fact, we use the most aggressive clipping, i.e. normalization, which corresponds to a clipping threshold infinitely close to 0, yet Figure 1 shows the impact of clipping is much less significant than the impact of noising. In short, clipping does have a bias, but this bias is sufficiently small that the approximation is sufficiently accurate and informative.
> In the “Per-iteration improvement of DP-SGD” it says that parameter updates are generally small. When is this true in practice?
We thank the reviewer for this question. Here are a few cases where the parameter updates are generally small in deep learning: (1) the learning rate is small, because the update is the product of the learning rate and the gradient; (2) the model size is large, so the training experiences a phenomenon known as lazy training (especially in the neural tangent kernel regime); or (3) weight decay is applied, so the parameters stay within a ball around the initialization and the updates are bounded.
> In Table 3, why do performances of many fine-tuning cases reduce from DINO to Ours(eps=2)? I think the performance should always improve? At the same time, few-shot performance of proposed method is significantly higher than DINO in all cases. Can you clarify this difference?
We confirm that on the pretraining dataset ImageNet, the performance indeed always improves with DP continual training, including Ours(eps=2). Therefore the inconsistency may arise from dataset distribution shift, i.e. some datasets are less similar to ImageNet, so the improvement does not transfer well. We will add this discussion as a future direction!
> It would be good to formally write an algorithm of DP continual pre-training, as I found section 4.2 a bit confusing.
We have added it in the rebuttal Appendix D.
---
Rebuttal Comment 1.1:
Title: Thanks for the response!
Comment: Thanks for the response. The rebuttal clarifies my questions and concerns, and I have raised my score accordingly. | Rebuttal 1:
Rebuttal: We thank all the reviewers for the comments and have put every effort into addressing them. Please let us know if there are further questions (though revision is not allowed this year). We provide the algorithm of DP continual pretraining in the PDF here (Appendix D).
Pdf: /pdf/daf3ed256bb2eaf287e9201b745c7d6d7185ea16.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Differentially Private Set Representations | Accept (poster) | Summary: The paper considers private representations of sets (under the neighboring relation of adding/removing an element). The goal is to answer set membership queries correctly with some nontrivial 1-alpha probability of being correct. (For a universe of size 1 this can be addressed by randomized response, so this problem generalizes releasing a single private bit.) Of course, the privacy guarantee will need to depend on alpha. Previous related work considered histograms (or multisets) and private versions of Bloom filters, but the performance of these past approaches on set membership queries has not been studied. The paper takes an interesting new angle, using ideas from space-efficient filters to get differentially private set representations using small space. It also gives lower bounds that nearly match the upper bound. Experimental comparisons to existing techniques suggest a better privacy-utility trade-off with comparable time usage.
Strengths: Strengths:
- Studies an interesting special case of a well-studied problem (private histograms) and gives better guarantees for this special case
- An interesting generalization of randomized response to a "sparse" setting
- Literature review is comprehensive and well-written
- The techniques are interesting and novel
Weaknesses: Weaknesses:
- It took me a while to understand why the results hold, the writing could be better
- It seems that the lower bound in Theorem 1.4 cannot possibly hold as stated (see questions below)
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you make the privacy proof clearer? I think there are subtleties that should be discussed in the *main part* of the paper. For example, it would be good with a discussion of the probability of returning (⊥,⊥,⊥,S) when the linear system does not have full rank. This seems like a blatant privacy violation, though it happens only with probability 𝛿, so does not contradict approximate DP.
- Are there not some lower order terms missing in the space usage stated in Theorem 1.1 and 1.2? It seems that if you need to choose the field size a bit larger than 1/alpha, more bits per entry will be needed. Also, if the field size is at least e^epsilon it seems that each entry requires ceiling(epsilon/ln(2)) bits to encode.
- If you do not sample (i.e., use p=1), could you apply randomized response to the vector b before solving the linear system to achieve DP?
- If you return (⊥,⊥,⊥,S) with probability 𝛿 even when the linear system is solvable, does it yield pure DP?
- In some places (e.g. line 470) it is stated that the field size is 1/alpha. I believe you simply need it to be *at least* 1/alpha, as stated in line 261?
- Theorem 1.4 seems to be missing a condition that alpha is not too close to zero (or that epsilon is not too large)? The proof technique in the appendix does not seem able to show any space bound larger than the entropy of a random set of k elements from a universe of size n.
- Can you confirm the limitation mentioned below?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: It should be made clearer that the DP guarantee relies on the hash functions being fully random while the space guarantees relies on the hash functions being pseudorandom. It would perhaps be more correct to state the results as computational DP.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer ys34 for their valuable and comprehensive feedback.
In response to the reviewer's suggestion, we will improve the clarity of the privacy proof and incorporate additional context within the main body of the paper. We acknowledge the importance of explicitly explaining why returning $(\perp, \perp, \perp, S)$ is acceptable when the linear system cannot be solved, and why this does not violate the definition of approximate differential privacy. It is worth highlighting that returning $(\perp, \perp, \perp, S)$ is only a matter of convenience to simplify the privacy proof. In practice, alternative measures can be taken to prevent the release of the set in plaintext. For example, the algorithm could retry with newly sampled $h$ and $\mathsf{Row}$ until solving the linear system succeeds.
The reviewer is correct that Theorems 1.1 and 1.2 are missing lower-order terms. The space usage of the approximate DP scheme should instead be roughly ~$1.05 \cdot k \cdot \epsilon \cdot \lg(e)$ bits, and roughly ~$k \cdot \epsilon \cdot \lg(e)$ bits for the pure DP scheme. We thank the reviewer for pointing this out, and we promise to fix it in the next iteration of our paper.
The reviewer also raised an interesting question on the possibility of applying randomized response to the RHS vector instead. In fact, we have explored this direction as well, but we encountered some challenges with either the query correctness probability or the privacy proof. However, we agree that this is an interesting modification to the algorithm and worth exploring further in the future.
The reviewer is correct that the field size just needs to be at least $1 / \alpha$ instead. We will make this clearer in the next iteration of our paper.
We appreciate the reviewer's observation that Theorem 1.4 may not hold true when $\alpha$ is sufficiently small. Our current lower bound states that, in such cases, we would require more than $\log \binom{n}{k}$ bits, which is evidently incorrect - we can clearly represent any $k$-subset with 0 error probability using $\log \binom{n}{k}$ bits. To address this, we will modify the theorem statement to instead require at least $\min(\Omega((1 + \delta / e^\epsilon) \cdot k \cdot \log(1 / \alpha)), \log \binom{n}{k})$ bits. This correction will be incorporated in the next iteration of our paper. It is worth noting that our lower bound proof explicitly makes this assumption on Line 537 (but does not state it in the theorem statement), so our lower bound result is not invalidated.
Finally, the reviewer is correct about the limitation of our work. Indeed, our DP guarantee and space usage do rely on the hash functions being fully random and/or pseudorandom. If we assume pseudorandom hash functions (PRFs), we indeed obtain computational DP.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I don't have further questions | Summary: The paper studies the problem of optimal differentially private representing a set $S$ with size $k$ on universe $U$, under the setting where the set size $k$ is significantly smaller than the universe size $|U|$. In such settings, the authors propose algorithms to compute $(\varepsilon, \delta)$-DP set representation with error probability $\frac{1}{e^\varepsilon + 1}$, while only consuming $O(k\varepsilon)$ space and $O(k\log(1/\delta))$ encoding/decoding time. The algorithm is based on representing the set via solutions to linear equation systems formed by computing random hash functions on each set member. The authors further prove that under $(\varepsilon, \delta)$-DP, their upper bound matches lower bounds in both error probability and space requirement, up to a factor of $\log(1/\delta)$.
Strengths: - The paper has a clear presentation, and the results are relatively complete in matching lower and upper bounds.
- The approach of encoding set membership via solution to linear equation systems is interesting, and its application to differentially private set representation is novel to the best of my knowledge
Weaknesses: - I find that one part of the proof for error probability needs more clarification. Specifically, in lines 465-466, the authors write "the false negative probability is s not sampled into S′ and the linear constraint being unsatisfied that is $p(1-F^{-1})$". However, to my understanding, this probability should be $(1-p)(1-F^{-1})$. Maybe the authors could clarify more whether this is a typo and whether it affects any results.
- The proof of error probability $\alpha$ requires the existence of a finite field of size at least $1/\alpha$, which limits the applicability of the proposed algorithm for small $\varepsilon$. For example, the algorithm's error upper bound appears to be identical for all $\varepsilon \leq \ln(2)$, as under such settings the smallest finite field larger than $1 + e^{\varepsilon}$ is always $Z_3$.
- Executing the algorithm requires knowledge of an upper bound of the set size $k$, as the algorithm requires to form an underdetermined linear equation system. This may significantly restrict the applicability and optimality of the proposed algorithm when no such knowledge is available.
Technical Quality: 2
Clarity: 3
Questions for Authors: Besides the points listed in the weakness, I have the following questions related to novelty and presentation.
- Is the non-DP variant of the set representation algorithm in the paper proposed in prior literature, or is it a novel algorithm? This is to understand the novelty of the proposed algorithm.
- In Theorem 1.1 and 1.2, the pure DP encoding time is $O(k(\log k)^2)$ while the $(\varepsilon, \delta)$-DP encoding time is $O(k\log(1/\delta))$. Considering that typically we have $\delta\ll 1/k$, this suggests that the pure DP algorithm needs smaller encoding time, which is a bit counterintuitive as pure DP is a stronger requirement. Could the authors clarify why this happens?
- The decoding Algorithm 2 appears to require new computation for each element of the universe $U$. However, in Theorems 1.1 and 1.2, the decoding time only grows with the set size $k$ and does not depend on the universe size $|U|$. Could the authors clarify this discrepancy?
Minor typo:
- line 221, $n \geq k$ should be $m \geq k$, "full rank" should be "full column rank"
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer VHxw for their valuable and constructive feedback.
We will first address the reviewer's question regarding the error probability. We think that the proof presented in our paper is accurate. We believe the misunderstanding may arise from the way the pseudocode in Algorithm 1 is structured. In Algorithm 1, we insert each element of set $S$ into set $S'$ with a probability of $1 - p$. Note that this is equivalent to removing each element from $S$ with a probability of $p$ and considering the resulting set as $S'$. Consequently, the probability of a false negative is indeed $p(1 - 1 / |F|)$, because a false negative can only occur if the element is not present in $S'$ (i.e., removed) and a false positive does not occur for this removed element. We will try to rewrite the pseudocode to make this clearer.
The reviewer raised the question of whether the non-DP set representation algorithm in our paper had been discussed in earlier research. To clarify, some of the linear systems in our paper have been used to construct non-DP sets while others have not. Random band matrices have been studied as non-DP sets (we mention this in the first paragraph of Section 3). On the other hand, Vandermonde matrices have never been used to build non-DP sets to our knowledge. However, we want to emphasize that the novelty of our work lies in our observation that any linear system with certain properties can be modified to produce differentially private set representations. Our work further proposes a general DP framework based on these linear systems. We believe this observation is highly significant and non-trivial.
We note that there are also non-DP set data structures (like cuckoo filters) that are not directly compatible with our framework, indicating that our observation is non-trivial. To our knowledge, we are unaware of any method to convert them into DP sets. In particular, there are certain properties unique to the linear system set data structures that we rely on to ensure DP privacy guarantees.
The reviewer also requested clarification regarding the encoding runtimes of the pure DP and the approximate DP algorithms. Indeed, the results may appear counterintuitive. The distinction between the two algorithms lies in how they solve linear systems.
For the pure DP algorithm, the Vandermonde matrix's structure allows the use of an FFT-like algorithm to solve the linear system efficiently. However, this approach is not feasible for the random band linear systems in the approximate DP algorithm, so we use Gaussian elimination instead. While Gaussian elimination typically takes $O(n^3)$ on a general linear system, we emphasize again that the random band linear system's unique structure allows the system to be solved very efficiently in $O(n \log(1/\delta))$, both theoretically and practically.
It is worth noting that this difference in matrix structure also impacts the decoding times. The pure DP decoding algorithm has a time complexity of $O(k)$, while the approximate DP decoding algorithm has a time complexity of $O(\log(1 / \delta))$. In this respect, the pure DP construction offers pure differential privacy and efficient encoding at the cost of a slower decoding time.
In relation to the reviewer's third question, we emphasize that the decoding times of $O(k)$ and $O(\log(1 / \delta))$ apply to decoding/querying a single element. Consequently, if we query $T \subseteq U$ elements, the total decoding time becomes $O(|T| \cdot k)$ and $O(|T| \cdot \log(1 / \delta))$, respectively. Our formulation of the decoding time is consistent with previous studies on the differentially private $k$-sparse vector problem (e.g. Table 1 in [1]). In these works, the decoding time is expressed in terms of the time required to access a single entry of the $k$-sparse vector. However, we acknowledge that the term "decoding" can be ambiguous, so we will change it to "access" or "query" instead in the next iteration.
[1]: https://arxiv.org/pdf/2106.10068
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply, which partially addresses my concerns, except for weaknesses 2 and 3. Thus I will keep my rating as is currently. | Summary: The paper addresses the problem of releasing a set $S $ of $ k $ elements from a potentially very large universe $ U $ in a differentially private manner. Here, two input sets $ S $ and $ S' $ are considered neighboring if their symmetric set difference is at most one; that is, $ S $ and $ S' $ differ by adding or removing exactly one element.
The objective is to publish a concise representation of the elements in $ S $ that allows determining whether an element in $ U $ belongs to $ S $, while minimizing both false positives and false negatives. The paper introduces new algorithms for constructing succinct representations using solutions from random linear systems based on the elements in $ S $.
The $(\epsilon, \delta)$-differentially private construction achieves an error probability of $\frac{1}{e^\epsilon + 1}$, uses space of at most $1.05k\epsilon$ bits, has an encoding time of $O(k \log(1/\delta))$, and a query time of $O(\log(1/\delta))$ per element.
On the other hand, the $\epsilon$-differentially private construction maintains the same error probability, uses space of at most $k\epsilon$ bits, but requires $O(k)$ query time per element. The space usage of both constructions matches the proposed lower bound up to constant factors.
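The linear-system construction summarized above can be sketched in a non-private form as follows (an illustrative sketch: the hash function, the field size $P$, and the membership target value 1 are assumptions for illustration; the DP schemes additionally subsample the input set and randomize the released solution):

```python
import random

P = 257  # a prime field size; the paper needs |F| >= 1/alpha (so here alpha ~ 1/257)

def row(u, m):
    """Pseudorandom hash row for universe element u (an illustrative hash choice)."""
    rng = random.Random(u)
    return [rng.randrange(P) for _ in range(m)]

def solve_mod_p(A, b):
    """Gaussian elimination over GF(P): return some x with A x = b, or None."""
    m = len(A[0])
    rows = [a[:] + [bi] for a, bi in zip(A, b)]  # augmented matrix
    piv_cols, r = [], 0
    for c in range(m):
        if r == len(rows):
            break
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], P - 2, P)  # modular inverse via Fermat's little theorem
        rows[r] = [v * inv % P for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(v - f * w) % P for v, w in zip(rows[i], rows[r])]
        piv_cols.append(c)
        r += 1
    if any(not any(rw[:-1]) and rw[-1] for rw in rows):
        return None  # inconsistent system (hash rows were linearly dependent)
    x = [0] * m
    for i, c in enumerate(piv_cols):  # free variables stay 0
        x[c] = rows[i][-1]
    return x

def encode(S, m):
    """Encode set S as any solution x of the constraints row(u) . x = 1 for u in S."""
    return solve_mod_p([row(u, m) for u in S], [1] * len(S))

def query(x, u):
    """u decodes as a member iff its linear constraint evaluates to 1."""
    return sum(a * v for a, v in zip(row(u, len(x)), x)) % P == 1

# Encode a 5-element set into m = 12 field elements (m slightly above k, as in the paper)
S = [3, 17, 42, 99, 123]
x = encode(S, m=12)
```

Members always satisfy their planted constraints, while a non-member's constraint hits the target value only with probability about $1/P$, which is the role the field size plays in the error probability $\alpha$; the DP versions, roughly speaking, build on this by dropping each member with probability $p$ and randomizing the released solution.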
Strengths: 1. The constructions appear novel. Rather than directly randomizing set $ S $, the paper utilizes elements from $ S $ to establish random linear constraints and publishes a solution that satisfies these constraints (demonstrating its existence).
2. The space usage aligns with the lower bound.
3. The algorithms demonstrate efficiency: both approaches exhibit slightly more than linear encoding time, and the approximate differentially private algorithm achieves $ \tilde{O}(1) $ query time.
Weaknesses: It seems there's a concern regarding the privacy analysis in the paper, specifically related to the use of the set $ S $ of size $ k $ to generate random linear constraints. Let's clarify and address the issues raised:
1. **Number of Linear Constraints $ m $**: It's mentioned that $ m = (1 + \beta) k $ for some constant $\beta$. This implies that the number of constraints $m$ depends on $k$.
2. **Dimension of Published Solution $ x $**: The solution $ x $ that is published has a dimension of $ m $, which is dependent on $ k $ as discussed.
3. **Privacy Concern**: Given the definition of neighboring datasets (differing by adding or removing one element from $ S $), an adversary could potentially distinguish between these datasets by observing the dimension of the output vector $ x $. This suggests a potential privacy vulnerability if the dimension of $ x $ reveals information about the dataset $ S $.
To address this issue, possible fixes might impact the current error probability and subsequently affect the space usage analysis.
Technical Quality: 1
Clarity: 3
Questions for Authors: Are the representations proposed in this paper mergeable? For example, if we have two representations $x_S$ and $x_T$ created by the algorithms in this paper for sets $S$ and $T$, can we directly create a representation $x_{S \cup T}$ for $S \cup T$ from $x_S$ and $x_T$?
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 1hbz for their constructive feedback.
The reviewer raised a concern about the privacy analysis of our paper, suggesting that the size of the published vector could compromise privacy.
We want to clarify that the parameter $k$ is an upper bound on the size of the set $S$ to be encoded. In other words, we treat $k$ as an algorithm parameter, and the algorithm only accepts an input set $S$ with $|S| \leq k$. Therefore, the dimension of the published vector is fixed to $m = (1 + \beta)k$ for all valid input sets $S$ of size at most $k$, and the privacy proof holds in this scenario.
We note that this aligns with prior works ([1], [2]) that studied differentially private releases of $k$-sparse vectors. Here, a $k$-sparse vector is defined as a vector with at most $k$ non-zero entries (see first paragraph of Section 2 in [1] and second paragraph of page 2 in [2] for the definition of $k$-sparse vectors). Furthermore, the notion of DP is only defined over $k$-sparse vectors (see Definition 2.1 in [1] for example). Our work can be seen as studying a more specific instance of a differentially private $k$-sparse vector problem where the non-zero entries are restricted to values of one. In prior works, the space usages of the algorithms were stated in terms of $k$ (an upper bound on the number of non-zero entries), which aligns with how we state the space usages of the algorithms in our work (see Table 1 in [1] and [2]).
Upon reviewing our paper more carefully, we acknowledge that this point was not clearly stated. We commit to addressing this in the next iteration of our paper to ensure clarity.
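The fixed-dimension convention clarified above can be illustrated with a toy encoder (purely illustrative, not the paper's linear-system construction; the capacity check, the choice $\beta = 0.25$, and the placeholder payload are our assumptions). The output dimension depends only on the parameter $k$, so neighboring sets yield outputs of identical size:

```python
import math
import random

# Toy stand-in for a capacity-parameterized encoder: the published vector
# always has dimension m = (1 + beta) * k, regardless of |S| (as long as
# |S| <= k), so its size leaks nothing about the input set.
def encode(S, k, beta=0.25, seed=0):
    if len(S) > k:
        raise ValueError("input set exceeds the capacity parameter k")
    m = math.ceil((1 + beta) * k)
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(m)]  # placeholder payload

x1 = encode({1, 2, 3}, k=100)
x2 = encode({1, 2, 3, 4}, k=100)  # neighboring set: one element added
assert len(x1) == len(x2) == 125  # dimension is fixed by k alone
```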
The reviewer also posed an intriguing question: "Can the two published vectors $x_S$ and $x_T$ (encoding $S$ and $T$, respectively) be merged as $x_{S \cup T}$ to encode the union set $S \cup T$?" In fact, we have considered this exact problem, but unfortunately, we could not find a suitable solution. For now, we leave this as an interesting open research question.
[1]: https://arxiv.org/pdf/2106.10068
[2]: https://arxiv.org/pdf/2112.03449
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
We recommend updating the definitions accordingly and revisiting the paper to modify the descriptions of $S$ wherever it appears. For instance:
1. In the abstract, line 2: "...sets of size $k$ from..."
2. Line 235: "Suppose we are given an input set $S = \{s_1, \dots, s_k\} \subseteq U$ of size $|S| = k$." | Summary: The paper presents new differentially private (DP) mechanisms for representing sets of size $k$ from a large universe. It introduces two algorithms: one for $(\epsilon, \delta)$-DP representations and the other for pure $\epsilon$-DP representations, with faster decoding. Both algorithms achieve optimal privacy-utility trade-offs and match new space lower bounds up to small constants.
Strengths: * The work is novel and appears to be correct, though I did not read and check the proofs for the theorems in great detail.
* Theorem statements are clear, and are presented in a transparent manner.
* Experimental results are presented clearly and align with the theoretical work.
Weaknesses: * I think that the technical work is strong, but the use case of this kind of work remains a bit unclear to me. If the authors could spend more time motivating the need for DP set representations and specific use cases, this would help contextualize the work better. The authors state that this is useful for applications where users wish to privately disclose sets of bookmarked websites; can this example be fleshed out more? What would the privacy attacks be, and who would the adversary be? This would help situate the work in the context of local vs. central DP, etc.
* This also connects with the related works: since I find it somewhat difficult to understand the exact use case of DP set representations, it was hard for me to understand what gaps the related works did not fill, even though the related works section is well written.
* Some work in Section 3 could be introduced a bit more, for example the work on “Random Band” and “Vandermonde” matrices. I don’t think it’s clear (at least to me) why this design choice was made in the pure versus approximate DP setting. This would also help with understanding the gap between the pure and approximate DP settings.
Technical Quality: 3
Clarity: 4
Questions for Authors: * In Algorithm 1, could you explore the failure modes of the algorithm more?
* Theorem 3.1: $F$ is introduced, but it is not directly introduced as an input to Algorithm 2. Some organizational work there would help with comprehension.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer Gmoe for their valuable feedback.
To illustrate the usefulness of our schemes, we will revisit the installed-apps use case briefly mentioned in the paper and provide a concrete example. Imagine an analyst at an app software company looking to gather statistics, in a differentially private manner, on the percentage of a specific population who have installed apps developed by the company. One way to achieve this would be to use the randomized response scheme, which could take the form of the following question for each app $Y$ in the set of candidate apps: "Does this device have app $Y$ installed?". Then, using the probability parameter $p$, the analyst can infer how many devices have installed these candidate apps.
However, now imagine that the set of candidate apps is very large, say on the order of thousands (perhaps the analyst also needs to gather statistics on competitors’ apps). This is much more than the number of apps installed on a typical device. In this case, asking and retrieving answers to the questions may incur significant bandwidth overhead. One could use non-DP sets, but that would directly reveal all installed apps to the analyst.
One solution to this problem is to employ our scheme. We configure an upper bound $k$ on the number of apps installed on the device (say, 250), then run our encoding algorithm to release the differentially private representations of the apps from the devices (suppose that we ignore devices with more than 250 apps installed, which are considered outliers). The analyst can invoke the decoding algorithm on the candidate apps to determine the distribution of installed/uninstalled apps, and derive accurate statistics on the percentage of devices with these apps installed. While the differentially private app representation includes more information compared to a basic randomized response approach (as it encompasses data on other apps), it can offer a practical balance between privacy and utility in certain scenarios. Note that our scheme still ensures that the analyst cannot exactly identify the set of installed apps on each device.
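For concreteness, the randomized-response baseline described above can be sketched as follows (a toy simulation of ours; the true install rate, $\epsilon$, and device count are illustrative values). Each membership bit is flipped with probability $1/(e^\epsilon + 1)$, the same per-element error rate achieved by the paper's constructions, and the analyst debiases the aggregate:

```python
import math
import random

# Toy simulation of per-app randomized response: each device flips its true
# membership bit with probability 1/(e^eps + 1), and the analyst debiases
# the aggregate report rate. All concrete numbers are illustrative.
def randomized_response(bit, eps, rng):
    keep = math.exp(eps) / (math.exp(eps) + 1.0)  # P[report truthfully]
    return bit if rng.random() < keep else 1 - bit

def estimate_rate(reports, eps):
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    q = 1.0 - p                                   # flip probability
    observed = sum(reports) / len(reports)
    return (observed - q) / (p - q)               # unbiased debiasing

rng = random.Random(0)
eps, true_rate, n_devices = 1.0, 0.3, 200_000
reports = [randomized_response(1 if rng.random() < true_rate else 0, eps, rng)
           for _ in range(n_devices)]
est = estimate_rate(reports, eps)                 # close to true_rate
```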
In response to the reviewer’s suggestion, we will elaborate more on the “Random Band” and “Vandermonde” constructions in the next iteration of our paper. The primary distinction between our pure DP and approximate DP schemes lies in the failure probability associated with the constructed linear systems. When utilizing the Vandermonde matrix construction, the constructed linear system is always solvable, resulting in a pure DP scheme (it never outputs $(\perp, \perp, \perp, S)$). In contrast, using the random band matrix construction introduces a small probability of failing to solve the linear system, leading to a non-zero probability of outputting $(\perp, \perp, \perp, S)$.
While outputting $(\perp, \perp, \perp, S)$ in this scenario may appear to be a blatant privacy violation, it is important to note that the probability of this occurring is small, as it is bounded by $\delta$ in the approximate DP definition. Therefore, it satisfies the approximate DP requirements.
Furthermore, it is crucial to emphasize that outputting $(\perp, \perp, \perp, S)$ solely serves as a convenience in the privacy proof. In practice, alternative measures can be taken to ensure that the set is not disclosed in plaintext. For example, the algorithm can simply retry with new $h$ and $\mathsf{Row}$ until the constructed linear system is solvable.
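The retry strategy can be sketched as follows (a toy stand-in: the paper constructs band systems over a finite field, whereas here we simply resample a random real-valued 0/1 constraint matrix until it has full row rank, so a solution exists):

```python
import numpy as np

# Resample the random constraint matrix until the linear system R x = b is
# solvable (full row rank), instead of ever releasing the set in plaintext.
rng = np.random.default_rng(1)
k, m = 8, 10                       # k constraints, solution dimension m > k
b = rng.standard_normal(k)         # stand-in right-hand side derived from S
attempts = 0
while True:
    attempts += 1
    R = (rng.random((k, m)) < 0.5).astype(float)  # random 0/1 constraint rows
    if np.linalg.matrix_rank(R) == k:             # solvable: stop retrying
        x = np.linalg.lstsq(R, b, rcond=None)[0]  # min-norm solution
        break
```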
Finally, we thank the reviewer for suggesting improvements to the presentation of the paper. We promise to incorporate the feedback in the next iteration of our paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Retraction-free optimization over the Stiefel manifold with application to the LoRA fine-tuning | Reject | Summary: The authors propose a retraction-free Riemannian optimization scheme on Stiefel and oblique manifolds to perform parameter-efficient fine-tuning (PEFT) in LoRA style. The proposed approach exploits the theory of landing flows on Stiefel manifolds. Theoretical results demonstrating convergence of this iterative scheme are presented, and complemented by numerical experiments.
Strengths: The proposed method combines the advantages of Riemannian methods while avoiding the computational burden of retracting on the Stiefel manifold. The application in the context of parameter-efficient fine-tuning represents a novelty, and enhances the relevance of the numerical results. The work is well-presented, covering both algorithmic aspects and the experimental section effectively.
Weaknesses: All the presented theory is developed on a function defined on $St(d,r)$, while LoRA fine-tuning gives rise to an objective function $f(B,A) = L(BA)$, where $B \in St(d,r)$ (or $Ob(d,r)$), and $A \in \mathbb{R}^{r \times m}$. This objective function has to be minimized over $St(d,r) \times \mathbb{R}^{r \times m}$, and the advantages of optimizing on a compact manifold are thus lost.
Unfortunately, this makes the presented theoretical results not directly useful for the practical case under consideration.
To give a more precise statement, for example in Lemma 3, the constant $\widehat{L}$ would depend on $A$. By the mean value theorem, we would get a bound of the kind
$$
\lVert \mathrm{grad}_B f(A,B_1) - \mathrm{grad}_B f(A,B_2) \rVert \leq C(A) \lVert B_1 - B_2 \rVert
$$
(as noted in the equation after line 183 for the Euclidean gradient).
Since the space in which $A$ resides is not compact, one would need at least a uniform control on $||A_k||$ over the iterations to make the theory interesting for LoRA fine-tuning.
It is interesting to note, however, that the authors observe exact numerical convergence to the constraint in all cases.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. **Concerning the comment made in the "weaknesses" section:** In all the proofs, the Lipschitz constants $L_f$ and $D_f$ depend on $A$, making the convergence analysis not directly applicable in the numerical case under investigation. I would appreciate the authors' comment on this, maybe I am missing something here?
2. I believe however that the observed convergence behavior is not a coincidence, even if the setting is different.
The issue can be addressed by assuming $L(X) \to +\infty$ as $||X|| \to +\infty$. With enough regularity assumed, this ensures that the sublevel sets of $L$ are compact. Given an initial condition $X_0 = B_0A_0$ in the compact set $L^{-1}((-\infty,M])$, the flow $\dot A = -\nabla_A L(BA)$ decreases $L$, thereby remaining within the initial compact set $L^{-1}((-\infty,M])$. This would provide an effective bound for the Lipschitz constant, without compactness of the original domain.
3. I don't see however a way to control $||A||$ uniformly in time for "classification-like" problems, where the global interpolating minima may be off at infinity.
4. Line 93: the definition of $f$ has a typo. Also, "differentiable" is more common to use.
5. Line 131: Even if it's clear what you mean, $\bar{U}_{St(d,r)}(\frac18)$ was defined nowhere in the main manuscript.
6. The choice of notation is a bit unfortunate in some points. I would personally substitute the loss function $L$ with $\mathcal L$, just to avoid confusion with the Lipschitz constants.
I am inclined to raise my score, provided the authors clarify my doubts and address the mentioned issues (or at least discuss these points in the manuscript).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: As noted in the "weaknesses" section, I believe there is a delicate point that is not addressed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our work and providing detailed comments. The following are our responses.
### Weaknesses:
All the presented theory is developed on a function defined on $\text{St}(d, r)$, while LoRA fine-tuning gives rise to an objective function $f(B, A) = L(BA)$, where $B \in \text{St}(d, r)$ (or $\text{Ob}(d, r)$), and $A \in \mathbb{R}^{r \times m}$. This objective function has to be minimized over $\text{St}(d, r) \times \mathbb{R}^{r \times m}$, and the advantages of optimizing on a compact manifold are thus lost.
Unfortunately, this makes the presented theoretical results not directly useful for the practical case under consideration. To give a more precise statement, for example in Lemma 3, the constant $\hat{L}$ would depend on $A$. By the mean value theorem, we would get a bound of the kind$\lVert \text{grad}\_B f(A, B_1) - \text{grad}\_B f(A, B\_2) \rVert \leq C(A) \lVert B\_1 - B\_2 \rVert$
(as noted in equation after line 183 for the Euclidean gradient).
Since the space in which $A$ resides is not compact, one would need at least a uniform control on $\lVert A_k \rVert$ over the iterations to make the theory interesting for LoRA fine-tuning. It is interesting to note, however, that the authors observe exact numerical convergence to the constraint in all cases.
**Reply:** **The variability of Lipschitz constants over an unbounded domain is not unique to our method.** Consider a simple two-layer neural network with weight matrices $U$ and $V$. For a loss function $\mathcal{L}(U, V)$, it is natural that the Lipschitz constant of $\nabla_V \mathcal{L}$ depends on the norm of $U$, where the domain of $U$ spans the entire Euclidean space, thus being unbounded.
This phenomenon is common in various optimization scenarios and can be managed by introducing regularity assumptions on the loss function, such as coercivity and differentiability. Specifically, if the iterates of an algorithm adhere to a certain descent property on their loss function values, they will remain within a bounded sublevel set by the coercivity of the loss function. We also note that there is a line of research that analyzes the convergence of algorithms by employing local Lipschitz smoothness instead of global Lipschitz smoothness.
### Questions:
1. **Concerning the comment made in the "weaknesses" section:** In all the proofs, the Lipschitz constants $L_f$ and $D_f$ depend on $A$, making the convergence analysis not directly applicable in the numerical case under investigation. I would appreciate the authors' comment on this, maybe I am missing something here?
**Reply:** Gradient Lipschitz continuity is a widely used assumption in the analysis of optimization algorithms. Though this condition may seem restrictive for unbounded domains, particularly in unconstrained optimization, it can be relaxed by incorporating coercivity (which implies bounded sublevel sets) and ensuring differentiability of the objective function. The key is how to ensure that the iterates lie in a sublevel set of $L$. This is easy in the deterministic setting, where line search is an effective tool. However, it would be difficult in the stochastic setting, where line search is not applicable. **It should be noted that such an issue is not specific to our algorithm, but applies to all stochastic algorithms.**
2. I believe however that the observed convergence behavior is not a coincidence, even if the setting is different. The issue can be addressed by assuming $L(X) \to +\infty$ as $\lVert X\rVert \to +\infty$. With enough regularity assumed, this ensures that the sublevel sets of $L$ are compact. Given an initial condition $X_0 = B_0 A_0$ in the compact set $L^{-1}((-\infty, M])$, the flow $\dot{A} = -\nabla_A L(BA)$ decreases $L$, thereby remaining within the initial compact set $L^{-1}((-\infty, M])$. This would provide an effective bound for the Lipschitz constant, without compactness of the original domain.
**Reply:** In our response to the weakness, we acknowledge that the variability of Lipschitz constants over an unbounded domain is a common issue in machine learning tasks, not just specific to our approach. To address this, one effective strategy is to assume that the loss function is proper, coercive, and smooth. See also our responses to the weakness and Q1.
3. I don't see however a way to control $\lVert A\rVert$ uniformly in time for "classification-like" problems, where the global interpolating minima may be off at infinity.
**Reply:** As in our response to previous questions, we may need to control the step size and the estimation accuracy of the gradient to ensure the boundedness of $\lVert A\rVert$ by maintaining iterations within a bounded level set. Could you please provide more details on your concerns regarding the potential blow-up of $\lVert A\rVert$?
4. Line 93: the definition of $f$ has a typo. Also, "differentiable" is more common to use.
**Reply:** Revised.
5. Line 131: Even if it's clear what you mean, $\bar{U}_{\text{St}(d, r)}\left(\frac{1}{8}\right)$ was defined nowhere in the main manuscript.
**Reply:** It is defined as $\bar{U}_{\rm St}(1/8) = \\{ Y \in \mathbb{R}^{d \times r} \mid \lVert Y - X \rVert_F \leq \frac{1}{8} \text{ for some } X \in {\rm St}(d,r) \\}$, i.e., the set of matrices within Frobenius distance $1/8$ of the Stiefel manifold. We have added this to the notation part.
6. The choice of notation is a bit unfortunate in some points. I would personally substitute the loss function $L$ with $\mathcal{L}$, just to avoid confusion with the Lipschitz constants.
**Reply:** Thank you for the suggestion. We will revise our text accordingly to enhance readability.
---
Rebuttal Comment 1.1:
Comment: I wish, first of all, to thank the authors for their rebuttal.
Regarding their answers:
**1, 2**: I agree that this issue is not specific to your proposed algorithm, it was not my intention to make it sound like a peculiarity of your work.
However, my impression of the manuscript is that there is a "distance" between the theory and the test cases under consideration. Of course a minimal set of assumptions is always best, but I was expecting at least a separate result that would, in some way, also cover the cases presented in the experimental section, even if this requires making additional assumptions.
**3**: As mentioned before, my suggestion was to try to integrate your theoretical results more into the experimental setting. For example, in classification tasks (often of interest for your application), my concern was that the loss is of exponential type, and it is known that, for instance, on separable data, $||BA|| \rightarrow +\infty$ [1].
This is why I was skeptical about convergence guarantees in some cases of interest for your work (some of the GLUE Debertav3 tasks fit this setting for example).
[1] M.S. Nacson et al., "Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate", AISTATS 2019.
In any case, I would like to increase my score, provided that you add a small paragraph discussing additional necessary assumptions for a broader class of problems (on the line of what you did in your answer).
---
Reply to Comment 1.1.1:
Comment: 1, 2: I agree that this issue is not specific to your proposed algorithm; it was not my intention to make it sound like a peculiarity of your work. However, my impression of the manuscript is that there is a "distance" between the theory and the test cases under consideration. Of course, a minimal set of assumptions is always the best, but I was expecting at least a separate result that would, in some way, also cover the cases presented in the experimental section, even if this requires making additional assumptions.
**Reply:** The global Lipschitz continuity may not hold in certain instances. To enhance the adaptability of our convergence results for the applications detailed in the experimental section, we will add a paragraph to discuss more practical alternatives, such as coercivity, differentiability, and local gradient Lipschitz continuity [SIOPT 2023], aiming to clarify how these properties can be met in terms of the application presented in the experimental section.
[SIOPT 2023] Xiaoxi Jia, Christian Kanzow, and Patrick Mehlitz. "Convergence Analysis of the Proximal Gradient Method in the Presence of the Kurdyka–Łojasiewicz Property Without Global Lipschitz Assumptions." SIAM Journal on Optimization 33, no. 4 (2023): 3038-3056.
3: As mentioned before, my suggestion was to try to integrate your theoretical results more into the experimental setting. For example, in classification tasks (often of interest for your application), my concern was that the loss is of exponential type, and it is known that, for instance, on separable data, $||BA|| \rightarrow +\infty$ [1]. This is why I was skeptical about convergence guarantees in some cases of interest for your work (some of the GLUE Debertav3 tasks fit this setting for example).
[1] M.S. Nacson et al., "Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate", AISTATS 2019.
**Reply:** Thanks for your further clarification.
In [1], the authors comment that replacing the global Lipschitz smoothness requirement with local Lipschitz smoothness and an appropriately small step size could be sufficient to keep iterates within a region where the Lipschitz constants remain uniformly bounded for the exponential-type loss function. Specifically, the footnote in [JMLR 2018, Assumption 3], a companion paper to [1], states: "The exponential loss does not possess a global $\beta$ smoothness parameter. However, initialization with $\eta < \frac{1}{\mathcal{L}(w_0)}$ ensures that the gradient descent iterates maintain bounded local smoothness."
Furthermore, there is a growing body of research within the optimization community that focuses on algorithm convergence using local rather than global Lipschitz smoothness. This approach often depends on additional assumptions to ensure favorable properties of the local Lipschitz constant during iterations, such as Assumption 3.2 in [JOTA 2022], where the objective function is both bounded from below and lower-bounded by an affine function. For recent advancements, please refer to [SIOPT 2023].
[JOTA 2022] C. Kanzow and P. Mehlitz. Convergence properties of monotone and nonmonotone proximal gradient methods revisited. Journal of Optimization Theory and Applications, 195(2):624–646, 2022.
[JMLR 2018] Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S. Srebro, N. . The implicit bias of gradient descent on separable data. Journal of Machine Learning Research, 19(70), 1-57, 2018.
In any case, I would like to increase my score, provided that you add a small paragraph discussing additional necessary assumptions for a broader class of problems (on the line of what you did in your answer).
**Reply:** Thank you for your insightful comment. We will definitely add a paragraph to discuss the potential lack of global Lipschitz smoothness and how to establish more practical assumptions for a broader class of problems, particularly in terms of the considered numerical applications. | Summary: Retraction-free optimization algorithms on the Stiefel manifold have been proposed in [1,18,19,41], among others. The motivation is that if the cost of the objective function/gradient evaluation is significantly larger than that of a retraction, then retraction-free optimization algorithms show their advantages and efficiency. In particular, for the landing algorithm proposed in [1], the penalty parameter is important and may not be easy to choose. This paper gives an analysis showing that if the parameter is 1/3, the initial point is sufficiently close to the Stiefel manifold, and the step size is chosen sufficiently small, then the algorithm converges linearly to a stationary point. This result gives a concrete value of the parameter. The result is further incorporated into optimization over low-rank matrices, and Manifold-LoRA is proposed. Numerical experiments show that the proposed method outperforms the baseline algorithms.
Strengths: This paper gives a concrete value of $\mu$ and a gap between $x_0$ and $\bar{x}_0$ such that the algorithm converges under reasonable assumptions. Numerical experiments show that the proposed method is more effective than the existing approach.
Weaknesses: (1) Though the value of $\mu$ and an upper bound on $\|x_0 - \bar{x}_0\|$ are given concretely, the choice of step size is unknown. Theoretically, the step size needs to be sufficiently small (see Theorem 1). Is there any theoretical suggestion for the choice of the step size?
(2) Numerical experiments report results of the comparisons. However, the definition of "result" is not given. Is the result computational time, classification accuracy, a notion of correctness, or something else?
(3) Problem (12) does not remove all the ambiguity. Note that if $B \in St(d, r)$, then $BA = BOO^TA = \tilde{B}\tilde{A}$, where $O$ is an orthogonal matrix and $\tilde{B} = BO$ is still in $St(d, r)$. Likewise for $B \in Ob(d, r)$. Is it possible to completely remove the ambiguity by considering the quotient manifold?
(4) Why is the numerical performance of Manifold-LoRA for using Stiefel and Oblique manifold in (12) different? The optimization problem is equivalent in the sense that the local minimizer/stationary point does not change.
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions are given in the weaknesses section. More questions are given below.
(1) L117, $X \in St(d, r)$ or $X \in \mathbb{R}^{d \times r}$?
(2) What is the percentage of the computational cost of the retraction in the overall computations?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations of the paper are discussed in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. The following are our responses.
### Weaknesses:
1. Though the value of $\mu$ and an upper bound on $\|x_0 - \bar{x}_0\|$ are given concretely, the choice of step size is unknown. Theoretically, the step size needs to be sufficiently small (see Theorem 1). Any theoretical suggestion for the choice of the step size?
**Reply:** **An unknown step size is not specific to our algorithm**; it also applies to retraction-based algorithms, where the step size relies on the unknown Lipschitz constant of the Riemannian gradient for problem (1).
**Theorem 1 demonstrates that a suitable constant step size results in convergence.** In contrast, the landing algorithm in [1] converges only to a neighborhood whose size is dependent on the step size. According to Theorem 1, an exact step size can be calculated if $L$, $\hat{L}$, and $\hat{D}_f$ are known. If these parameters are unknown, it is necessary to estimate their upper bounds or to utilize numerical strategies, such as a grid search. The role of this theory is to ensure the existence of a constant step size that allows the algorithm to converge. In our numerical experiments, we employ the grid search.
2. Numerical experiments report results of the comparisons. However, the definition of ``result`` is not given. Is the result computational time or classification accuracy or a notion of correctness or something else?
**Reply:** We report the overall (matched and mismatched) accuracy for MNLI, Matthew’s correlation for CoLA, Pearson correlation for STS-B, and accuracy for other tasks. Higher is better for all metrics. Additionally, since the additional computational cost per step induced by our method is negligible, the number of epochs needed can be roughly equated to the time required.
3. Problem (12) does not remove all the ambiguity. Note that if $B \in St(d, r)$, then $B A = B O O^T A = \tilde{B}\tilde{A}$, where $O$ is an orthogonal matrix and $\tilde{B} = BO$ is still in $St(d, r)$. Likewise for $B \in Ob(d, r)$. Is it possible to completely remove the ambiguity by considering the quotient manifold?
**Reply:** The distinction between the Stiefel and Grassmann manifolds is not significant, as observed in applications from low-rank optimization (e.g., matrix completion and matrix decomposition) and principal component analysis. The primary reason for choosing the Stiefel manifold over the Grassmann manifold is the ease of analytical treatment. Numerically, the differences between these two approaches are negligible, especially since the projection operators on their respective tangent spaces are identical, a common scenario in low-rank optimization and principal component analysis.
4. Why is the numerical performance of Manifold-LoRA for using the Stiefel manifold and the oblique manifold in (12) different? The optimization problem is equivalent in the sense that the local minimizer/stationary point does not change.
**Reply:**
Setting the constraint set of $B$ to either the Stiefel manifold or the oblique manifold in equation (12) leads to two distinct optimization problems, characterized by unique optimality conditions and stationary points. For the Stiefel manifold, the stationary points are defined by:
$$
\nabla f(X) - X \,\mathrm{sym}(X^T\nabla f(X)) + X(X^TX - I) = 0.
$$
For the oblique manifold, the condition is:
$$
\nabla f(X) - X \,\mathrm{diag}(\mathrm{diag}(X^T\nabla f(X))) + X \,\mathrm{diag}(X^TX - I) = 0.
$$
Therefore, the sets of stationary points for the two manifolds are not equivalent. The differing geometrical constraints imposed by each manifold are expected to influence the numerical performance of the algorithm.
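To make a landing-style iteration concrete, here is a minimal numpy sketch (our own, for the square orthogonal case only, not the authors' exact algorithm; the objective $f(X) = \frac{1}{2}\lVert X - M\rVert_F^2$, the step size, and the penalty weight $1/3$ are illustrative choices). Each step combines a skew-symmetric relative-gradient term, which preserves $X^\top X$ to first order, with the penalty gradient $X(X^\top X - I)$ that contracts the constraint violation, so no retraction is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]  # a random orthogonal matrix
M = Q + 0.1 * rng.standard_normal((d, d))         # target, near the manifold

def f(X):
    return 0.5 * np.linalg.norm(X - M) ** 2

X = np.eye(d)                                     # start on the manifold
eta, lam = 0.05, 1.0 / 3.0                        # step size, penalty weight
f0 = f(X)
for _ in range(4000):
    G = X - M                                     # Euclidean gradient of f
    Psi = 0.5 * (G @ X.T - X @ G.T)               # skew(G X^T)
    X = X - eta * (Psi @ X + lam * X @ (X.T @ X - np.eye(d)))

violation = np.linalg.norm(X.T @ X - np.eye(d))   # constraint violation
```

In the sketch the iterates stay close to the orthogonal group throughout, and the final constraint violation is driven to (numerically) zero, matching the exact-feasibility behavior discussed in the reviews.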
### Questions:
The questions are given in the weaknesses section. More questions are given below.
1. L117, $X \in {\rm St}(d, r) ~ or ~ X \in \mathbb{R}^{d \times r}$
**Reply:** It should be $X \in \mathbb{R}^{d \times r}$; we have revised it.
2. What is the percentage of the computational cost of the retraction in the overall computations?
**Reply:** Considering a LoRA layer represented by:
$$
H = (W + BA)S,
$$
where $H \in \mathbb{R}^{n \times k}$, $S \in \mathbb{R}^{n \times k}$, $W \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times p}$, and $A \in \mathbb{R}^{p \times n}$, we define the gradient with respect to $H$ as $\mathcal{D}_H$. The gradients with respect to $B$ and $A$ are computed as $\mathcal{D}_H S^T A^T$ and $B^T \mathcal{D}_H S^T$ respectively. Thus, the computational cost for these gradients amounts to $\mathcal{O}(n^2k + n^2p) + \mathcal{O}(2pnk)$. Our method is retraction-free, adding only the computational tasks of projection and calculating the penalty gradient, both of which are $\mathcal{O}(np^2)$. This additional cost is negligible when $p$ is small.
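The gradient formulas above can be checked numerically (a toy verification of ours, using the made-up loss $\mathcal{L} = \frac{1}{2}\lVert H\rVert_F^2$ so that $\mathcal{D}_H = H$; the dimensions are arbitrary):

```python
import numpy as np

# Toy check (ours, not from the paper) of the gradient formulas for a
# LoRA-style layer H = (W + B A) S, with loss L = 0.5 * ||H||_F^2 so that
# the upstream gradient is simply D_H = H.
rng = np.random.default_rng(0)
n, p, k = 6, 2, 4
W = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
A = rng.standard_normal((p, n))
S = rng.standard_normal((n, k))

def loss(B, A):
    H = (W + B @ A) @ S
    return 0.5 * np.sum(H ** 2)

# Gradients as stated above: D_B = D_H S^T A^T and D_A = B^T D_H S^T.
D_H = (W + B @ A) @ S
D_B = D_H @ S.T @ A.T          # shape (n, p)
D_A = B.T @ D_H @ S.T          # shape (p, n)

# Central finite differences on one entry of B and one entry of A.
eps = 1e-6
E_B = np.zeros_like(B); E_B[1, 0] = eps
E_A = np.zeros_like(A); E_A[0, 2] = eps
num_B = (loss(B + E_B, A) - loss(B - E_B, A)) / (2 * eps)
num_A = (loss(B, A + E_A) - loss(B, A - E_A)) / (2 * eps)
```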
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the reply.
1) Though many algorithms do not know the constant step size, a commonly used practical approach is using a line search algorithm such as backtracking with an appropriate initial step size to guarantee convergence. Is it possible to give a computable initial step size with a backtracking algorithm such that the result of Thm 1 holds? If so, then the proposed algorithm is more practical.
4) I agree that if only the B-subproblem is considered, then the stationary point of the subproblem is different for different constraints on B. However, Problem (12) considers L(BA) with an arbitrary A. It follows that the stationary points of L should be the same. That is what I am concerned about.
---
Reply to Comment 1.1.1:
Comment: 1. Though many algorithms do not know the constant step size, a commonly used practical approach is using a line search algorithm such as backtracking with an appropriate initial step size to guarantee convergence. Is it possible to give a computable initial step size with a backtracking algorithm such that the result of Thm 1 holds? If so, then the proposed algorithm is more practical.
**Reply:** Note that Theorem 1 differs from the standard line search-based convergence in two key aspects:
- The decrease in each step is applied not directly to the iterates, but to their projected versions.
- Each step may not always decrease the objective function value, but the accumulated effect over several steps will reduce the objective value, assuming the step size is small enough.
While this type of convergence analysis is common in areas of optimization such as distributed optimization and min-max optimization, designing a line search method based on this approach is not straightforward. Exact convergence with a constant step size (even if the step size is unknown) is generally considered crucial, as opposed to convergence to a neighborhood or within a certain horizon.
However, it would be interesting to explore the development of a new Lyapunov function to establish a one-step decrease directly on the iterates, which could lead to a line search method suitable for this setting in future work.
2. I agree that if only the B-subproblem is considered, then the stationary point of the subproblem is different for different constraints on B. However, Problem (12) considers $L(BA)$ with an arbitrary $A$. It follows that the stationary points of L should be the same. That is what I am concerned about.
**Reply:** Thank you for your further clarification. By treating $BA$ as a single entity, the two problems become equivalent. Alternatively, different manifold constraints on $B$ correspond to different parameterizations of the low-rank matrix $W$.
Although adding different constraints on $B$ (or $A$) may not change the stationary point to which the algorithm converges, it accelerates the convergence process by leveraging the manifold geometry, enabling more efficient movement along the manifold. This is why manifold optimization methods can be more powerful than traditional constrained optimization approaches. A similar acceleration is observed in low-rank matrix completion problems, where methods with a Grassmann manifold constraint converge faster, even though they ultimately reach the same solution with zero recovery error. Please refer to [MPC 2012, NeurIPS 2011, IEEE TIT 2012, IEEE TIT 2009] for the details.
- [MPC 2012] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Mathematical Programming Computation, 4 (2012), 333-361.
- [NeurIPS 2011] N. Boumal and P.-A. Absil. RTRMC: A Riemannian trust-region method for low-rank matrix completion. Advances in Neural Information Processing Systems, 24 (2011).
- [IEEE TIT 2012] W. Dai, E. Kerman, and O. Milenkovic. A geometric approach to low-rank matrix completion. IEEE Transactions on Information Theory, 58(1):237-247, 2012.
- [IEEE TIT 2009] W. Dai and O. Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Transactions on Information Theory, 55(5):2230-2249, May 2009, doi: 10.1109/TIT.2009.2016006. | Summary: This paper considers solving optimization problems with constraints requiring orthonormal columns (i.e., the matrix belongs to the Stiefel manifold). The leading method for solving such problems is Riemannian optimization. However, Riemannian optimization requires a costly retraction operation. The authors propose to circumvent this by introducing an additional penalty term that steers the optimization towards respecting the manifold constraints. Indeed, the authors show that with a correct setting of the parameters, the optimum will be on the manifold, and the algorithm will find it. The authors advocate that an additional advantage of their algorithm is that we know how to set the parameters for the penalty term, and so their algorithm is parameter-free.
A significant part of the paper is devoted towards motivating the study in terms of low-rank adaptation in LLMs, and showing experiments in that vein.
Strengths: - A very elegant method for retraction free optimization on the Stiefel manifold.
- Detailed theoretical analysis showing the algorithm converges to a critical point on the manifold.
- The theoretical analysis gives explicit guidance on how to set the penalty parameter.
- The LLM application and experiments appear impressive. However, I am not an expert on this subject, so it is hard for me to assess how significant the results and evaluations are.
Weaknesses: (The following were addressed by the authors in the rebuttal)
Major issues (affecting the recommendation):
1) Novelty: Citation [1] considers optimization with constraints on the orthogonal group (i.e. St(n,n)). It seems that the core idea on how to implement retraction-free optimization already appears there. The authors mention this in "related work", and say that [1] does not discuss the r<n case. Inspection of [1] reveals that this is not the entire story. In Sec 3.5 of [1] the case of r<n is discussed briefly, and it is said the results can be extended for that case. However, the authors of [1] are skeptical of the value of this, as they mention that there are fast retraction methods (i.e. Cayley) for the Stiefel manifold.
- Setting the penalty parameter: the authors advocate that they give an explicit value for the penalty parameter. And indeed Theorem 1 sets the parameter $\mu$ to 1/3. However, I do not think the situation is so simple. The theorems have the additional assumption that the iterates start close to the manifold (1/8). This is, of course, easy to achieve - just start on the manifold itself. However, for the proof to work, shouldn't all iterates stay inside this bound? This necessitates that the other parameter (step size) be small enough. And indeed, the theorem requires that the step size be small enough, and does not specify how small. Without looking in detail at the proofs, my guess is that changing the penalty step size ($\mu$) affects how close you need to be to the manifold (the value 1/8), which affects how small the step size needs to be. In other words, the authors load all the complexity of setting the parameters onto the step size of the main objective, saying there is an upper bound on its value without specifying what that value is. You cannot call this parameter-free.
Another point is that the values of the parameter probably affect convergence rate, though the authors do not discuss this at all.
Minor comments (do not affect the recommendation):
- Line 117: If X is on the manifold, shouldn't \bar{X}, which is the projection of X on the manifold, be exactly X?
- Line 119: "satisfies the restricted secant"
- Line 131: What is U_St (1/8)? Not defined. Ditto line 136.
- Line 133: If the condition of twice diff is assumed, then state it earlier.
- Eq (10): \hat{D}_f is not defined.
- Table 1: Why are the metrics changing between columns?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Please address more carefully why you think you are novel with respect to [1]?
- Why are fast retraction methods not sufficient?
- How does setting the parameters affect the convergence rate?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Nothing to add.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for carefully reading our manuscript and appreciating our work. The following are our responses.
#### Novelty:
1. Citation [1] considers optimization with constraints on the orthogonal group (i.e., St(n,n)). It seems that the core idea on how to implement retraction free optimization already appears there. The authors mention this in "related work", and say that [1] does not discuss the $r<n$ case. Inspection of [1] reveals that this is not the entire story. In Sec 3.5 of [1] the case of $r<n$ is discussed briefly, and it is said the results can be extended for that case.
**Reply**: Our method differs from [1] in the following aspects:
- The construction of our landing algorithm is more straightforward. It involves the summation of the Riemannian gradient of the loss and the gradient of the penalty function of the Stiefel manifold. While the landing field in [1] shares a similar structure, its first term (as seen in Eq. (4) of [1]) is less interpretable. Our method may provide a clearer approach to design landing algorithms for general manifolds.
- We explore the strong convexity-like property, specifically the restricted secant inequality, of the penalty problem and provide an explicit value for the penalty parameter, which is not given in [1]. Consequently, our landing algorithm only requires tuning the step size, whereas [1] necessitates careful selection of both the step size and the penalty parameter.
- Our theorem demonstrates the exact convergence when using a constant step size. In contrast, the landing algorithm in [1] only converges to a neighborhood whose size depends on the step size. **This issue is acknowledged in their paper, particularly in the paragraph following Proposition 10.** Developing a theory that ensures exact convergence with a constant step size is crucial for making these methods competitive with retraction-based approaches.
Although the landing algorithm in [1] can be extended to the case where $r<n$ in Section 3.5, our analysis introduces a novel penalty parameter-free approach. Additionally, our theoretical results on exact convergence with a constant step size are both new and significantly different from those presented in [1]. We will revise our manuscript to make the comparison clear.
2. However, the authors of [1] are skeptical of the value of this, as they mention that there are fast retraction methods (i.e., Cayley) for the Stiefel manifold.
**Reply**: Both [1] and [MP 2015] indicate that the computational cost of the Cayley transformation is $4nr^2 + \frac{40}{3}r^3$, which is more than twice the cost of our method at $2nr^2$ for any $r$. Additionally, performing a retraction on the Stiefel manifold involves a specific orthogonalization procedure that is challenging to scale and parallelize, such as the matrix inversion required in the Cayley transformation. In contrast, our landing algorithm can be executed efficiently using BLAS3 operations.
[MP 2015] Bo Jiang, and Yu-Hong Dai. A framework of constraint preserving update schemes for optimization on Stiefel manifold. Mathematical Programming 153, no. 2 (2015): 535-575.
3. Setting the penalty parameter: the authors advocate that they give an explicit value for the penalty parameter. And indeed Theorem 1 sets the parameter $\mu$ to 1/3. However, I do not think the situation is so simple. The theorems have the additional assumption that the iterates starts close to the manifold (1/8). This is, of course, easy to achieve - just start on the manifold itself. However, for the proof to work shouldn't all iterates stay inside this bound? This necessitates for the other parameter (step size) to be small enough. And indeed, the theorem requires that the step size be small enough, and does not specify how small.
**Reply**: Our results show that setting $\mu = \frac{1}{3}$ and $0 < \alpha \leq \frac{1}{2c_1}$ with $c_1$ specified in Line 418 will yield convergence. To ensure that iterates remain within the designed neighborhood, the step size must not exceed $\frac{1}{24 \hat{D}_f}$, based on the derivations from Equation (9) and the norm of $\lVert\nabla f(X)\rVert$ over $\bar{U}\_{{\rm St}}(1/8)$. That is, if $\alpha \le \frac{1}{24 \hat{D}\_f}$, all iterates $X_k$ will stay within $\bar{U}\_{{\rm St}}(1/8)$ for all $k$. This step size bound is implied by the condition $\alpha \le \frac{1}{2c_1}$. We will revise our manuscript to clarify it.
**For responses to the remaining questions, please refer to the general response.**
---
Rebuttal Comment 1.1:
Comment: Thank you for the answers. I will raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for appreciating our work! | Summary: This paper proposes a new algorithm, Manifold-LoRA, which incorporates the Stiefel manifold constraint to accelerate low-rank adaptation (LoRA) in fine-tuning LLMs. It also provides theoretical and experimental validation for the retraction-free and penalty parameter-free optimization methods.
Strengths: This paper is highly technical and indicates a strong mathematical background in optimization and manifold theory. Manifold-LoRA leverages manifold geometry to reduce redundancy in LoRA fine-tuning, leading to enhanced performance and faster convergence. Furthermore, it has robust experimental validation across various datasets.
Weaknesses: W1: Some experimental results are unclear and not well defined.
W2: Lack of the discussion about limitations of your method.
W3: Some findings of the experiments are hard to understand.
Technical Quality: 3
Clarity: 2
Questions for Authors: Q1: In Table 1, could you please explain the meaning of m/mm? As I know, it's not an estimation of the performance of NLP models. You also didn't define Acc and Mcc. Also, why not use Acc to evaluate the performance of all the datasets?
Q2: The finding "It can be seen that our method 211 is consistently superior to other baselines." doesn't match the results in Table 1. What makes you conclude this?
Q3: Some findings are not clearly represented. For example, "We conclude that the proposed 228 Manifold-LoRA method achieves a 2x speed-up in training epochs compared to AdamW, while 229 simultaneously improving model performance".
Q4: The two datasets in the QA task are widely used, but also weak. Have you tried other datasets? For example, the HotpotQA dataset.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations discussion is insufficient and uninformative.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. The following are our responses.
1. **W1: Some experimental results are unclear and not well defined.**
**Reply**: We have revised the descriptions of the numerical experiments accordingly. Specifically, we report the overall (matched and mismatched) accuracy for MNLI, Matthews correlation for CoLA, Pearson correlation for STS-B, and accuracy for the other tasks. For Table 2, we report Exact Match (EM) and F1 scores. Higher is better for all metrics.
2. **W2: Lack of the discussion about limitations of your method.**
**Reply**: In our conclusion, we address the theoretical limitations of our paper. While the constraints of the Stiefel and Oblique manifolds contribute to advancements in fine-tuning large language models, this method is difficult to extend to other areas such as pre-training and alignment. Therefore, it is essential to generalize our approach to more comprehensive manifolds, such as the Grassmann manifold. In addition, in the standard setting, LoRA uses a fixed rank for all layers, which has been shown to yield sub-optimal performance, as demonstrated in paper [46]. They address this issue by modifying the model architecture. In our conclusion, we also acknowledge that we did not use the coefficient matrix $A$ to dynamically select the rank $r$, which may have impacted our fine-tuning performance.
3. **W3: Some findings of the experiments are hard to understand.**
**Reply**: As shown in Table 1, our method outperforms the other baselines on average scores. Specifically, **Sphere (r=16)** achieves better performance than other baselines on 7 out of 9 tasks, with the exceptions of MNLI-MM and QQP. In Tables 2 and 3, our method performs consistently better than the baselines. We will make the statement about the experiments more precise.
**Questions**:
1. **Q1**: In Table 1, could you please explain the meaning of m/mm? As I know, it's not an estimation of the performance of NLP models. You also didn't define Acc and Mcc. Also, why not use Acc to evaluate the performance of all the datasets?
**Reply**: MNLI-M and MNLI-MM represent two similar datasets for which the evaluation metric is accuracy. The MNLI dataset is a collection of sentence pairs annotated for textual entailment. MNLI-M is the matched split of the MNLI dataset, while MNLI-MM is the mismatched split.
Mcc stands for Matthews Correlation Coefficient, used for evaluating CoLA. The Pearson correlation is used for STS-B, and accuracy is the metric for other tasks. These metrics are widely used and are considered standard in the field, such as papers [22], [15], and [25].
2. **Q2**: The finding "It can be seen that our method 211 is consistently superior to other baselines." doesn't match the results in Table 1. What makes you conclude this?
**Reply**: The best results are shown in bold in Table 1; our method outperforms other methods on most datasets, except on MNLI-MM and QQP compared to full fine-tuning. Also, in the setting of Sphere (r=16), it achieves the best average score among all methods, followed by Stiefel (r=8).
3. **Q3**: Some findings are not clearly represented. For example, "We conclude that the proposed 228 Manifold-LoRA method achieves a 2x speed-up in training epochs compared to AdamW, while 229 simultaneously improving model performance".
**Reply**: In Figure 2(a), it is easy to see that our method requires only half an epoch to reach a training loss of 1, whereas LoRA requires 2 epochs to achieve the same result. Additionally, since the additional computational cost per step induced by our method is negligible, the number of epochs needed can be roughly equated to the time required. In Figure 2(b) and Figure 2(c), it is obvious that our method achieves a higher EM (Exact Match) and F1 score.
4. **Q4**: The two datasets in the QA task are widely used, but also weak. Have you tried other datasets? For example, the HotpotQA dataset.
**Reply**: Please refer to the following table for the results on the HotpotQA dataset. Our method, Sphere $(r=16)$, performs best.
| **Methods** | **Params** | **HotpotQA (EM/F1)** |
|-------------------|------------|----------------------|
| Full FT | 184M | 63.3 / 76.7 |
| Adapter (r=16) | 0.61M | 60.2 / 74.2 |
| BitFit | 0.07M | 57.2 / 71.6 |
| LoRA (r=8) | 1.33M | 61.3 / 75.4 |
| LoRA (r=16) | 2.65M | 61.5 / 75.4 |
| Sphere (r=8) | 1.33M | 62.4 / 76.3 |
| Sphere (r=16) | 2.65M | **63.4 / 76.9** |
| Stiefel (r=8) | 1.33M | 61.6 / 75.4 |
| Stiefel (r=16) | 2.65M | 61.4 / 75.4 |
**Table**: Results with DeBERTaV3-base on HotpotQA. We report EM/F1. The best results in each setting are shown in bold.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply.
1. For Table 1, the design should clearly differentiate between dataset pairs and evaluators. For example, `m/mm` represents two similar datasets and should not be listed on the same line as evaluation metrics like Acc, Mcc, etc. Although Pearson Correlation is commonly used for the STS-B dataset, it is more appropriate to use Accuracy for the CoLA dataset, along with Mcc. Given that Accuracy is used for most datasets, it would be reasonable to present the performance of all methods for CoLA using Acc.
2. In your conclusion, instead of stating, "It can be seen that our method is consistently superior to other baselines," it would be more accurate to acknowledge that while your method outperforms other methods on most datasets, it does not do so on all datasets. Therefore, a more rigorous statement would be to highlight that your method demonstrates superior performance in the majority of cases.
---
Rebuttal 2:
Comment: Thank you for your valuable feedback.
For Table 1, we will revise it to make it clearer.
We use MCC for CoLA for two main reasons:
1. MCC is a widely recognized metric in the context of LoRA fine-tuning. **It is considered a standard in LoRA fine-tuning due to the fact that the original LoRA paper [22] (in their Section 5.1, Table 2) reported MCC scores for the CoLA experiments in the GLUE benchmark**. This is the primary reason.
2. MCC is advantageous in handling imbalanced datasets because it evaluates all elements of the confusion matrix—true positives, true negatives, false positives, and false negatives—providing an unbiased assessment that accuracy (ACC) may not offer in skewed data situations. Moreover, MCC's scale from -1 to +1 provides detailed information about model performance, distinguishing between perfect predictions, random guesses, and complete inaccuracies, unlike ACC's more limited 0 to 1 range.
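To illustrate point 2 with a small, self-contained example (ours, not from the paper): on a 95/5 split, a classifier that always predicts the majority class scores high accuracy but zero MCC.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # conventional value when a factor is zero

# skewed data: 95 negatives, 5 positives; classifier predicts all negative
tp, tn, fp, fn = 0, 95, 0, 5
acc = (tp + tn) / (tp + tn + fp + fn)
print(acc)                   # 0.95 -- looks great
print(mcc(tp, tn, fp, fn))   # 0.0  -- reveals the model learned nothing
```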
We are, of course, willing to accommodate your suggestion by including the ACC scores for the CoLA dataset as well.
We will also refine the statement "Our method is consistently superior to other baselines" to be more rigorous based on the results as you suggested.
---
Rebuttal Comment 2.1:
Comment: The primary concerns you raised focus on the experimental section of our paper. We have clarified the metrics used in our experiments, which align with the standards of the field. In addition, we have conducted experiments on the dataset you suggested, and the results suggest the effectiveness of our method. Given that we addressed the primary concerns raised in your review, we would kindly ask you to adjust your review score while taking the discussion into account.
Rebuttal: **Continued rebuttal for Reviewer ohUo**
4. Without looking in detail in the proofs, my guess is that changing the penalty step size $\mu$ affects how close you need to be to the manifold (the value 1/8), which affects how small the step size need to be. In other words, the authors load all the complexity of setting the parameters onto the step size of the main objective. Saying there is an upper bound on its value, without specifying what that value is. You cannot call this parameter free.
**Reply**: The step size bound $\alpha \in (0, \frac{1}{2c_1}]$ explicitly depends on the smoothness parameter and the bound of the gradient norm of $f$, with $c_1$ defined in Line 418. It also implicitly depends on the penalty parameter and the neighborhood size determined by Lemma 4, which includes the condition $\alpha \le \frac{1}{24 \hat{D}_f}$ to ensure that iterates remain within the specified neighborhood. Generally, a smaller neighborhood size can lead to a faster convergence rate by Lemma 4 and allows a larger step size $\alpha$. However, $\alpha$ remains constrained by the landscape of the loss function, which may limit the allowable step size despite other conditions.
**We should note that there are two parameters in landing algorithms: the step size and the penalty parameter.** Specifically, the landing fields in both [1] and our manuscript take the form: $ V(X) = \alpha \nabla f(X) + \mu \varphi(X) $. In the previous paper [1], both parameters were unknown, requiring a sufficiently large $\mu$ and a varying $\alpha$ (with respect to the iteration number) to achieve convergence. Instead, by leveraging the strong convexity-like property of the penalty problem, we demonstrate convergence using $\mu = \frac{1}{3}$ and a constant step size $\alpha$. In this sense, we describe our method as penalty parameter-free.
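To make the landing update concrete, here is a toy sketch (our own, not the paper's code) of one parameterization of a landing-style step on a leading-eigenspace problem $f(X) = -\tfrac{1}{2}\mathrm{tr}(X^T M X)$, with the fixed $\mu = 1/3$; the iterates are never retracted, yet they stay close to the manifold:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
d = np.concatenate([np.full(p, 3.0), np.full(n - p, 1.0)])
M = Q @ np.diag(d) @ Q.T                       # symmetric, with a clear spectral gap

def landing_step(X, alpha=0.05, mu=1/3):
    """One retraction-free step: Riemannian-gradient part plus penalty pull-back."""
    G = -M @ X                                  # Euclidean gradient of f
    rgrad = G - X @ (X.T @ G + G.T @ X) / 2     # Riemannian-gradient part on St(n, p)
    penalty = X @ (X.T @ X - np.eye(p))         # gradient of ||X^T X - I||_F^2 / 4
    return X - alpha * (rgrad + mu * penalty)

X = np.linalg.qr(rng.standard_normal((n, p)))[0]  # start on St(n, p)
for _ in range(500):
    X = landing_step(X)

print(np.linalg.norm(X.T @ X - np.eye(p)))      # stays small without any retraction
print(np.trace(X.T @ M @ X))                    # approaches 9, the optimal value
```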
5. Another point is that the values of the parameter probably affect the convergence rate, though the authors do not discuss this at all.
**Reply**: The proof of Theorem 1 establishes that $\sum_{k=0}^K \alpha \lVert \operatorname{grad} f(X_k)\rVert^2 < \infty$. This implies $\min_{k =0, \ldots, K} \lVert \operatorname{grad} f(X_k)\rVert^2 \leq \mathcal{O}(\frac{1}{\alpha K})$. With this bound, a larger step size leads to faster convergence. This complexity bound aligns with the best-known results for retraction-based methods. We will revise our manuscript to state this clearly.
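The step from the summability bound to the rate is a standard argument (our own rendering, with $C$ denoting the finite value of the sum):

```latex
\sum_{k=0}^{K} \alpha \,\lVert \operatorname{grad} f(X_k)\rVert^2 \le C
\;\Longrightarrow\;
(K+1)\,\alpha \min_{0 \le k \le K} \lVert \operatorname{grad} f(X_k)\rVert^2 \le C
\;\Longrightarrow\;
\min_{0 \le k \le K} \lVert \operatorname{grad} f(X_k)\rVert^2
\le \frac{C}{\alpha (K+1)} = \mathcal{O}\!\left(\frac{1}{\alpha K}\right).
```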
### Minor comments (do not affect the recommendation):
- Line 117: If X is on the manifold, shouldn't $\bar{X}$, which is the projection of X on the manifold, be exactly X?
**Reply**: $X$ should be any matrix of $\mathbb{R}^{n\times p}$ and $\bar{X}$ is its projection on ${\rm St}(n,p)$. We have revised it accordingly.
- Line 119: "satisfies the restricted secant"
**Reply**: Revised.
- Line 131: What is $U_{\rm St} (1/8)$ ? Not defined. Ditto line 136.
**Reply**: In line 131, $U\_{\rm St}(1/8) = \\{ Y \in \mathbb{R}^{n \times p} \mid \min_{X \in {\rm St}(n,p)} \lVert Y - X \rVert_F < \frac{1}{8} \\}$, i.e., the set of points within distance $\frac{1}{8}$ of the Stiefel manifold.
In line 136, it should be $U_{\rm St}(1/8)$. Sorry for the confusion. We have added this in the notation part.
- Line 133: If the condition of twice diff is assumed, then state it earlier.
**Reply**: Revised.
- Eq (10): $\hat{D}\_f$ is not defined.
**Reply**: In Line 483, we define $\hat{D}\_f := \max_{X \in \bar{U}_{\text{St}(d,r)}\left(\frac{1}{8}\right)} \lVert\nabla f(X)\rVert$.
- Table 1: Why are the metrics changing between columns?
**Reply**: MNLI-M and MNLI-MM represent two similar datasets for which the evaluation metric is accuracy. The MNLI dataset is a collection of sentence pairs annotated for textual entailment. MNLI-M is the matched split of the MNLI dataset, while MNLI-MM is the mismatched split.
Mcc stands for Matthews Correlation Coefficient, used for evaluating CoLA. The Pearson correlation is used for STS-B, and accuracy is the metric for other tasks. These metrics are widely used and are considered standard in the field, such as papers [22], [15], and [25].
### Questions:
1. Please address more carefully why you think you are novel with respect to [1]?
**Reply**: See response to W1
2. Why are fast retraction methods not sufficient?
**Reply**: See response to W1
3. How does setting the parameters affect the convergence rate?
**Reply**: See response to W3 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Simulation-Free Training of Neural ODEs on Paired Data | Accept (poster) | Summary: The paper revisits using neural ordinary differential equations (NODEs) for modeling deterministic maps on paired data, e.g., maps that solve regression and classification problems. The authors propose utilizing flow-matching (FM), a recent simulation-free training method for NODEs, to overcome the computational overhead of traditional NODE training and inference. The paper first presents the problems with simply using FM for learning maps on paired data since the learned map by FM is not guaranteed to preserve the coupling presented in training but only to preserve the distributions. Second, the authors propose to solve the coupling preservation problem by adding a learned encoder-decoder on the label space and an additional encoder on the data space to "rewire" the trajectories so that they do not cross. Then, the coupling will be preserved through FM training. The authors demonstrate the efficacy of their approach on classification and regression tasks and show that learning with FM also facilitates learning linear maps, which can be inferred in a single function evaluation - alleviating both training and inference shortcomings of traditional NODEs.
Strengths: - The paper presents an interesting approach towards end-to-end learning of NODEs with flow matching when preserving the training set coupling is required.
- Experiments validate the efficiency of the approach, achieving SOTA performance - both in metrics and runtime.
**Presentation**
- Ideas and concepts in the paper are often presented with provided intuition, guiding the reader to the logic behind the algorithmic choices.
Weaknesses: **Presentation**
- Although intuitive explanations were mentioned as a strength, in some cases, they come in short. The paper lacks formal and rigorous explanations of the method and experimental settings. For instance:
- Adding noise to labels (L164) - I am unsure I understand the setting here. Is the noise added to the ground truth labels in the L2 loss? Or are they added to the embeddings $z_1$? An equation explicitly stating the noise addition would greatly help clarify this.
- In the preliminaries section, the introduction of flow-matching is inaccurate and lacks the main point that FM does not regress to the marginal velocity field but rather regresses to a conditional velocity field, named linear dynamics in the paper, while the marginal is not necessarily linear.
**Method**
- A naive solution to the crossing trajectories problem would be to augment the dimension of the learned flow, as suggested and discussed in [1,2], which is in a sense similar to training a conditional model where the condition is the initial point. The paper misses the sanity check and comparison with this most naive baseline, combining augmented NODEs with flow matching. I wonder: if one only learns $d_\psi$ and $g_\varphi$, sets $f_\phi$ to the identity, and trains the learned flow on a space augmented with the initial point, would that achieve better or worse results? According to [2], augmenting the flow model with the initial point would solve the crossing trajectories problem and allow the couplings to be preserved.
- I find the motivation for using NODEs for regression and classification rather weak when the learned map is linear (i.e., solved by a single NFE). In a sense, the motivation for using NODEs was to utilize "infinite" depth, in a shared weights manner, by learning a time-dependent function. But in the case presented in the paper, most of the "heavy lifting" in learning the representation is already done by the encoder-decoder neural nets, while the learned velocity could be thought of as a last layer represented by some non-linear function, separating the data, which is not linearly separable as shown by experiments in section 6.3, since as it appears from Table 1, there's not much of a performance gain in using NFE$>1$.
[1] [Augmented Neural ODEs](https://arxiv.org/abs/1904.01681), Dupont et al. (2019)
[2] [Augmented Bridge Matching](https://arxiv.org/abs/2311.06978), De Bortoli et al. (2023)
Technical Quality: 3
Clarity: 2
Questions for Authors: - Can the authors provide a comparison to augmented NODEs+FM training? or provide and explanation as to why they think this may not work?
- Regarding the second point in the method weaknesses, I would be happy to extend the discussion on this point and think if there's some experiment to justify the use of NODEs here better.
Typos:
- In Figure 1 (c), the title says `NFE=1`, but trajectories are curved. Could there be a typo in the plot title?
- Figure 1 caption labels (c) and (d) are mislabeled as (b) and (c).
- Table 1, CIFAR10, RNODE: the throughput value seems like a typo; it matches the simulation-free methods, while it should be lower.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** The paper lacks formal and rigorous explanations of the method and experimental settings. For instance, adding noise to labels (L164).
**A1.** We appreciate the comment and will revise the presentation in the method section to be clearer in the final version of the manuscript. For adding noise to labels (L164), we let the label decoder reconstruct the label from a noisy label embedding. Formally, \
$\mathcal{L}(\psi, \varphi) = \mathbb{E} [||d_{\psi}( g_{\varphi}(y) + \epsilon) - y||_2^2],$ where $ \epsilon \sim \mathcal{N}(0,\sigma^2)$.
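A minimal PyTorch sketch of this denoising label-reconstruction objective (module names and sizes are illustrative placeholders, not our actual architecture):

```python
import torch
import torch.nn as nn

# Illustrative sketch of the denoising label-reconstruction loss above:
# the decoder d_psi reconstructs y from the noisy label embedding
# g_varphi(y) + eps, with eps ~ N(0, sigma^2). Sizes are placeholders.
label_dim, latent_dim, sigma = 10, 32, 0.1
g = nn.Linear(label_dim, latent_dim)   # label encoder g_varphi
d = nn.Linear(latent_dim, label_dim)   # label decoder d_psi

def label_recon_loss(y: torch.Tensor) -> torch.Tensor:
    z = g(y)                                    # label embedding
    eps = sigma * torch.randn_like(z)           # Gaussian noise
    return ((d(z + eps) - y) ** 2).sum(-1).mean()

y = torch.randn(8, label_dim)
loss = label_recon_loss(y)
loss.backward()  # gradients flow to both psi (decoder) and varphi (encoder)
```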
>**Q2.** The introduction of flow-matching is inaccurate and lacks the main point. There are typos in Fig.1 and Tab. 1.
**A2.** We appreciate the reviewer’s effort to help make our manuscript rigorous and accurate. We will revise the description of Eq. (3) in Sec. 2, to clearly convey that the target velocity field $v_t$ is not a marginal velocity field but a conditional velocity field, which is defined by a per-sample basis. For typos:
- Fig. 1(c) shows the NFE for training, which is 1 in flow matching since it does not require a full trajectory to be computed during training. We will revise the caption to state this clearly.
- In Tab. 1, the throughput of RNODE on CIFAR10 is 0.19.
>**Q3.** The paper misses the sanity check and comparison with the most naive baseline, combining augmented NODEs with flow matching.
**A3.** To address the reviewer's concern, we have extended our analysis in Sec. 6.3, adding ANODE+FM (augmented NODE [1] with flow matching) to Tab. R.1 of the rebuttal PDF. Our model demonstrates higher classification accuracy and a lower disagreement ratio than this baseline. This is because our model can relax target trajectory crossing, as a consequence of learning the encoders with the flow loss. Since the fundamental cause of crossing is the predefined dynamics rather than insufficient dimensionality, simply allowing encoders to augment the dimension (as in ANODE) does not effectively prevent target trajectory crossing.
>**Q4.** The motivation for using NODEs for regression and classification is rather weak when the learned map is linear (i.e., solved by a single NFE).
**A4.** We agree with the reviewer’s point that the motivation for using NODEs becomes weaker when the learned trajectory is solved by a single NFE. However, our method encompasses not only the linear dynamics but also any nonlinear dynamics that connects two endpoints $z_0$ and $z_1$. As demonstrated in Fig. 4 of our paper, utilizing nonlinear dynamics (e.g., convex and concave) yields models that clearly benefit from additional NFEs.
In these cases, the motivation for using NODE as an infinite depth model remains valid. NODEs offer unique advantages such as parameter sharing across depth and the ability to freely trade off performance and computation without retraining. These properties make NODEs appealing compared to conventional neural networks, indicating their value as a research topic.
While our preliminary analysis of nonlinear predefined dynamics (L289-310) concluded that linear dynamics perform better than our nonlinear choices (i.e., convex and concave), this work can be extended by carefully designing new nonlinear dynamics that outperform linear ones. Future research could even make the dynamics a learnable, data-dependent component (L325-328), allowing the model to choose between simpler dynamics for inference cost and more complex dynamics for performance. We believe our work can serve as a starting point for such attempts by addressing the primary concern of heavy computation costs during NODE training.
[1] Augmented Neural ODEs, Dupont et al. (2019)
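For concreteness, predefined dynamics of the form $z_t = \alpha_t z_0 + \beta_t z_1$ could be sketched as below; the specific "convex" and "concave" schedules here are illustrative guesses, not necessarily the exact choices in the paper:

```python
import numpy as np

# Sketch of predefined dynamics z_t = alpha_t * z0 + beta_t * z1.
# The "convex" / "concave" schedules below are illustrative stand-ins; any
# schedule with (alpha_0, beta_0) = (1, 0) and (alpha_1, beta_1) = (0, 1)
# connects the two endpoints z0 and z1.
schedules = {
    "linear":  lambda t: (1.0 - t, t),
    "convex":  lambda t: (1.0 - t ** 2, t ** 2),          # slow start
    "concave": lambda t: (1.0 - np.sqrt(t), np.sqrt(t)),  # fast start
}

def interpolate(z0, z1, t, name):
    alpha_t, beta_t = schedules[name](t)
    return alpha_t * z0 + beta_t * z1

z0, z1 = np.zeros(3), np.ones(3)
for name in schedules:
    assert np.allclose(interpolate(z0, z1, 0.0, name), z0)  # starts at z0
    assert np.allclose(interpolate(z0, z1, 1.0, name), z1)  # ends at z1
```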
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
I have a few follow-up questions:
1. Can the authors describe the setting of the ANODE+FM experiment? how did they handle the different dimensionalities?
2. "simply allowing encoders to augment the dimension (like ANODE) doesn't effectively prevent the issue of target trajectory crossing." I had in mind adding an unrestricted dimension that is not bound to the FM loss, so it relieves the predefined trajectory crossings.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for the response.
In ANODE [1], zero padding is proposed as a method to augment the data dimension. Following this approach, in the ANODE+FM experiment we applied different-sized zero padding to the data and labels to match their dimensionalities. As a result, the data and label encoders contain no learnable parameters (they are just zero padding), and we trained the dynamics function using only the flow loss.
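A minimal sketch of these parameter-free zero-padding encoders (the dimensions below are illustrative placeholders):

```python
import numpy as np

# Sketch of the parameter-free zero-padding encoders in the ANODE+FM
# baseline: data and labels are padded with zeros to a common latent
# dimension. The dimensions below are illustrative placeholders.
data_dim, label_dim, latent_dim = 784, 10, 800

def pad_encode(v: np.ndarray, target_dim: int) -> np.ndarray:
    """Zero-pad the last axis of a batch of vectors to target_dim."""
    pad = target_dim - v.shape[-1]
    return np.concatenate([v, np.zeros(v.shape[:-1] + (pad,))], axis=-1)

x = np.random.randn(4, data_dim)   # data batch
y = np.random.randn(4, label_dim)  # label batch
z0 = pad_encode(x, latent_dim)     # "data encoder": zero padding only
z1 = pad_encode(y, latent_dim)     # "label encoder": zero padding only
```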
Regarding the second question, we are a bit unsure about the suggested experimental setting and would appreciate it if the reviewer could provide further details, especially the meaning of “adding an unrestricted dimension that is not bound to the FM loss”. While we believe the current experiment represents one of the most straightforward approaches, we are open to extending our experiments if there is a more appropriate setting.
[1] Augmented Neural ODEs, Dupont et al., 2019
---
Rebuttal 2:
Comment: >**Q1**. Following the description of the experiment, it seems that the augmented dimension does not contribute to resolving crossings. (…) The experiment I had in mind, is to have the third dimension as a learned one (like encoder), but at the end, at inference the label is the first dimension of the output.
**A1.**
We appreciate the reviewer for clarifying the detailed experimental setup. As a more concrete baseline to avoid the crossing trajectory problem, we additionally employed the idea from the Augmented Bridge Matching (AugBM) [1] as suggested by the reviewer in the original response, which conditions the dynamics model on the initial point.
As suggested by the reviewer, we chose the data encoder $f$ to be the identity, and employed the label encoder $g_\varphi$ and decoder $d_\psi$ pre-trained with the reconstruction loss, whose latent dimension matches the data's. Then, we trained the dynamics function $h_\theta$ on the augmented input $(z_0, z_t)$, similar to AugBM [1], using the flow loss $\mathbb{E}\_t||h\_\theta(z_0, z_t, t) - (z_1 - z_0)||$ until convergence. The result is given in **Table 1** below:
**Table 1**. Results
| | Train accuracy | Disagreement ratio |
| --- | --- | --- |
| Initial point conditioning | 45.50% | 33.47% |
| Ours | 98.80% | 0.02% |
While conditioning the dynamics function on the initial point can, in principle, avoid the crossing of target trajectories as in [1], in our preliminary study (Table 1) we observe that this approach suffers from under-fitting in practice. We conjecture that this is due to the increased variance of the loss and gradient introduced by conditioning the dynamics function on an additional initial point, which loses the Markovian property of the dynamics. This limitation is also discussed in the original paper [1] (page 9). In contrast, our method can jointly learn the encoders, which retains the Markovian property and hence reduces the variance. We appreciate the comment from the reviewer and will add more thorough comparisons and analyses with these baselines in the draft.
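A sketch of this initial-point-conditioned flow loss (the MLP architecture and sizes are placeholders, not the exact experimental setup):

```python
import torch
import torch.nn as nn

# Sketch of the initial-point-conditioned flow loss
# E_t || h_theta(z0, z_t, t) - (z1 - z0) ||, where the dynamics net
# h_theta takes the augmented input (z0, z_t, t). Architecture and
# sizes are illustrative placeholders.
d = 16
h = nn.Sequential(nn.Linear(2 * d + 1, 64), nn.Tanh(), nn.Linear(64, d))

def aug_flow_loss(z0: torch.Tensor, z1: torch.Tensor) -> torch.Tensor:
    t = torch.rand(z0.shape[0], 1)          # t ~ U[0, 1]
    zt = (1 - t) * z0 + t * z1              # linear predefined dynamics
    v_pred = h(torch.cat([z0, zt, t], dim=-1))
    return ((v_pred - (z1 - z0)) ** 2).mean()

loss = aug_flow_loss(torch.randn(8, d), torch.randn(8, d))
```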
[1] Augmented Bridge Matching, De Bortoli et al. (2023) | Summary: This paper presents an approach for instantiating a flow matching method for paired data $(x, y)$ without relying on iterative ODE solvers. The method uses an input encoder with a pair of target decoder and encoder to project the original data into a latent space. By imposing the form of the dynamics in latent space, the trajectory of the latent vector between $x$ and $y$ can be represented using a closed-form equation. This approach demonstrates competitive or superior performance compared to methods that require iterative ODE solvers and diffusion-based models.
Strengths: - The idea and motivation are clearly exposed, with sufficient detail to understand the intuition behind the approach.
- This work is a good example of the effort to link intuitions from various domains together to create a clear and simple method.
- The results are interesting, providing evidence that nonlinear dynamics in latent space can be eliminated without compromising prediction accuracy.
Weaknesses: - I found that the presentation of the main Table 1 is not entirely clear to me. The main messages (smaller NFE vs. competitive results) are conveyed, but it is still a bit confusing to see the number of NFEs directly in the table for the methods requiring only one NFE.
- Minor issues: many abbreviations are defined multiple times throughout the text. Please be careful if the authors used a writing assistant to help with the drafting.
Technical Quality: 3
Clarity: 2
Questions for Authors: - This is an extra point for me: I am curious about the differences between this approach and discrete depth neural networks. If the dynamics between $z_0$ and $z_1$ can be expressed with an interpolation, we could say that this architecture is very similar to the discrete depth neural networks, where the processor in the latent space is a single linear layer with or without a skip connection. An example of such an architecture can be seen in Lusch et al. (2018). Did the authors try to compare the proposed method with such an architecture?
References:
- Lusch et al. (2018), Deep learning for universal linear embeddings of nonlinear dynamics, Nature Communications, https://www.nature.com/articles/s41467-018-07210-0
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I confirm that the authors have addressed sufficiently the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** Presentation issues in Table 1 and repeated definition of abbreviations in the main text.
**A1.** We thank the reviewer for highlighting these presentation issues. We acknowledge that some abbreviations (e.g., NFE or NODE) are defined repetitively. In the final version of our manuscript, we will revise the presentation of Tab. 1 and improve the readability of the main text to address these concerns.
>**Q2.** Comparison with the discrete depth neural networks, where the processor in the latent space is a single linear layer with or without a skip connection.
**A2.** We appreciate the reviewer for bringing our attention to this interesting related work. Our model with linear dynamics shares the high-level motivation with Lusch et al. (2018) [1], which is to find an embedding space that yields linear dynamics between source and target. It is interesting to see that two different approaches, namely flow matching (with optimal transport) and Koopman operator, converge on the same point. Regardless of the theoretical background, both approaches are appealing as they seek to interpret a nonlinear system within a well-studied linear framework.
At the same time, we have identified several differences between our work and the line of research based on Koopman operator theory. While those works mainly focus on a systematic way to obtain a linearized representation of the underlying nonlinear dynamics (with eigenfunction), our work aims to find a way to learn it in a simulation-free manner, avoiding the heavy computation of forward simulation (e.g., which appears in $\mathcal{L}_{lin}$ of [1]) from an initial state to an end state. Additionally, compared to the discrete depth neural networks that have a single linear layer processor, our proposed method is generally applicable to any nonlinear dynamics that connects two endpoints $z_0$ and $z_1$, exemplified as 'convex' or 'concave' in our paper (L289-L298). This implies that in our case, it is possible to have a latent trajectory as a curve in non-Euclidean geometry whenever the interpolated state $z_t$ is tractable.
[1] Deep learning for universal linear embeddings of nonlinear dynamics, Lusch et al. (2018) | Summary: The authors propose to use the flow matching loss, which directly matches the dynamics of a neural ODE (NODE) model to the pre-defined (simple) vector field, for supervision tasks. While the flow matching with simple linear vector fields is efficient, it cannot work well for supervision tasks because the paired data structure can require a crossing trajectories, which cannot be achieved by using the ODE + linear vector field. To overcome this issue, the authors propose to use the input and label encoders, and learning simple linear vector field in the latent space. To prevent learning trivial dynamics (e.g., ignoring data), they also introduce the label decoder and label reconstruction loss, which makes the output latent signal to be meaningful. The authors validate their approach with various supervision tasks, and show the proposed framework outperforms other competitors in terms of the cost-performance trade-off.
Strengths: At first glance, the proposed method seems too simple; using autoencoders to match the latent dynamics with a simple one is somewhat straightforward. However, such a simple approach can remarkably improve NODE-based models for supervision tasks, with a significant margin over baseline NODEs.
The paper is extremely well-written and easy to follow.
Weaknesses: While this paper provides useful insights into NODEs, such as the crossing trajectory problems and the use of latent dynamics, these are well-known topics within the community of NODEs. Therefore, I believe this paper should be evaluated based on its practical application rather than its theoretical aspects.
From the practical perspective, while the proposed technique outperforms some popular NODE-based supervision models, it exhibits significantly lower performance than standard non-NODE baselines; a classification accuracy of 88.89% on CIFAR10 is not great. The fact that all NODE-based models fail at this task, and that the proposed model at least does better, does not bring much satisfaction (note that the diffusion-based CARD model can estimate uncertainty, though its performance is not great either).
I do not think that every model needs to achieve SOTA performance to be published. However, to at least have readers consider trying the proposed NODE-based framework instead of conventional finite-depth models (given that supervision is generally not approached using NODEs), I believe the proposed model should at least be compared to similar-sized non-NODE-based MLP and ResNet models.
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors experimentally demonstrated that the input/label encoders do not become arbitrarily complicated (i.e., do not learn all the information for the given task), and thus the latent dynamics play a sufficiently significant role in solving the task. Can the authors intuitively explain how this is possible?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors mention some limitations of the proposed method (e.g., assuming the underlying dynamics is linear).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** While this paper provides useful insights into NODEs, such as the crossing trajectory problems and the use of latent dynamics, these are well-known topics within the community of NODEs. Therefore, I believe this paper should be evaluated based on its practical application rather than its theoretical aspects.
**A1.** We agree with the reviewer that the crossing trajectory problem, and the corresponding solution of using a latent space that augments the data dimension, are already discussed in previous NODE literature [1, 2]. Those works mainly discuss the approximation capability of NODEs with insufficient data dimension, where NODEs without dimension augmentation are shown not to be universal function approximators [2, 3].
Our observation, however, concerns **a different type of crossing trajectory problem** that arises when applying flow matching for simulation-free training of NODEs. This issue stems from **intersections in target trajectories induced by predefined dynamics**, rather than from an inherent limitation of NODEs.
The toy experiment in Fig. 1 (or Fig R.1 of the rebuttal PDF) illustrates this difference. As shown in Fig. 1(b), NODE successfully fits the data even without a dimension augmentation. However, when we introduce flow matching for simulation-free training, the predefined linear dynamics lead to trajectory crossing (Fig. 1(c)). Our key contribution is resolving this issue by inducing a valid velocity field (Fig. R.1 (d)) for the dynamics function to regress on.
>**Q2.** To at least have readers consider trying the proposed NODE-based framework instead of conventional finite-depth models (given that supervision is generally not approached using NODEs), I believe the proposed model should at least be compared to similar-sized non-NODE-based MLP and ResNet models.
**A2.**
To address this concern, we conducted an additional experiment comparing our method with a ResNet model. Our main experiments primarily aimed to compare our proposed method with NODE baselines, using a simple CNN backbone following the convention of NODEs [4]. While this CNN model sufficiently demonstrates our method and allows comparison with NODE baselines, we found that we can boost the performance of our model by using stronger backbones.
We customized the ResNet-18 architecture for our model and compared it with a ResNet model. Both models have approximately the same number of parameters (11.2M). Our model achieved a classification accuracy of 94.5%, matching the ResNet-18 model's performance of 94.6% with similar training costs.
Unlike conventional neural networks, NODEs have the advantage of learning smooth, bijective continuous transformations between source and target. For instance, this property makes NODE-based classification models robust to adversarial data perturbation, as studied in TisODE [5]. However, it was previously difficult to consider NODEs as replacements for conventional neural networks in supervised tasks due to performance issues from inaccurate gradient estimation [6, 7] and the high training costs of numerical ODE solvers. As our method matches the performance and training cost of conventional neural networks, we believe NODEs with simulation-free training can now be viable alternatives to consider.
>**Q3.** The authors experimentally demonstrated that input/label encoders do not become arbitrarily complicated (i.e., learn all the information on the given task), thus the latent dynamics play a sufficiently significant role for solving the task. Can the authors intuitively explain how this is possible?
**A3.**
We appreciate your thoughtful comment. Intuitively, the dynamics function plays a crucial role in solving tasks, as it is the main component used to solve the ODE initial value problem from $z_0$ to $z_1$ during inference. Fitting the dynamics function to the target vector field is essential, while the input and label encoders support this by constructing an embedding space that prevents target trajectory crossings for the given data pairs and predefined dynamics. Therefore, the dynamics function effectively remains the key element in solving the task.
[1] Augmented Neural ODEs, Dupont et al. (2019)
[2] Dissecting Neural ODEs, Massaroli et al. (2020)
[3] Approximation Capabilities of Neural ODEs and Invertible Residual Networks, Zhang et al. (2020)
[4] Neural Ordinary Differential Equations, Chen et al. (2018)
[5] On Robustness of Neural Ordinary Differential Equations, Yan et al. (2019)
[6] Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE, Zhuang et al. (2020)
[7] MALI: A memory efficient and reverse accurate integrator for Neural ODEs, Zhuang et al. (2021)
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' thoughtful response. I will be increasing the review score from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for the constructive feedback and reassessment of our work. We will make sure to incorporate all discussions into our next revision. | Summary: This paper develops the Flow Matching (FM) algorithm to connect paired data. Due to the issue of crossing trajectories, FM in the data space cannot perfectly match associated pairs. To address this, the authors perform FM in an embedded space. Ultimately, they encode source and target data through an encoder end-to-end and learn FM loss in the embedded space. The main application is image classification.
Strengths: The paper begins with a reasonable motivation and is well-presented, making it easy to follow.
Weaknesses: - While the motivation is to avoid incorrect pair connections due to crossing trajectories in the data space, embedding data into a latent space does not guarantee the prevention of trajectory crossings. I recommend the authors visualize (on toy data, or on real data) that the proposed method experimentally prevents crossing trajectories. Moreover, it would be valuable if the authors showed that trajectory crossing occurs in real-world scenarios by comparing the results with FM. For example, it would support both the motivation and the proposed method if the proposed method outperformed FM in connecting paired real-world data.
- The improvement in accuracy may be due to the additional embedding networks rather than to resolving the issue of trajectory crossings. Therefore, it is recommended that the authors conduct experiments and visualize/demonstrate that the crossing issue can be resolved in a learned latent space on simple low-dimensional toy data.
- The comparison group is weak. There are algorithms like ANODE [1] and FFJORD [2] designed to connect paths with simpler trajectories. It would be beneficial to compare these algorithms as well.
[1] Augmented Neural ODEs, NeurIPS, 2019.
[2] FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models, ICLR, 2019.
Technical Quality: 2
Clarity: 3
Questions for Authors: - In the real-world scenario (the classification/regression tasks presented in the paper), I am not sure whether trajectory crossing is a crucial factor that influences performance. I am curious whether the authors can compare their models with FM (though there may be issues in making the source and target dimensions equal when implementing FM).
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See the Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1.** Embedding data into a latent space does not guarantee the prevention of trajectory crossings. The improvement in accuracy may be due to the additional embedding networks rather than resolving the issue. Please show that the proposed method experimentally prevents the issue.
**A1.** We would like to first clarify that while an embedding space alone does not prevent trajectory crossings, the flow loss (Eq. (5)) encourages non-crossing trajectories. As described in our paper (L128-130), this objective is optimized when the encoders induce non-crossing target trajectories. Since the flow loss remains high when target trajectories intersect (the dynamics function would have to fit multiple velocities at the intersection simultaneously), jointly optimizing the encoders with the dynamics function lets the encoders relax trajectory crossings by adjusting the embeddings.
To visualize this, we have extended the 2D toy experiment from Fig. 1, comparing trajectories learned by NODE, flow matching, and our method. Results are presented in Fig. R.1 of the rebuttal PDF. The ground truth coupling (Fig. R.1(a)) with linear predefined dynamics induces multiple points of intersection, causing naive flow matching with an identity encoder to fail in preserving the original coupling (Fig. R.1(c)). Our method (Fig. R.1(d)) successfully learns an embedding space inducing non-crossing target trajectories, correctly fitting the data with proper coupling.
As demonstrated in the toy experiment, we believe that our model mainly benefits from resolving trajectory crossing, rather than from additional architectural components. In fact, we use the same data encoder $f_\phi$ (which precedes the dynamics function) for all baselines in our experiments (Sec. 6.2-6.3) to ensure a fair comparison. This results in roughly the same architecture for all methods in the experiment, where the only difference is the label encoder, which is used in our method solely to achieve simulation-free training and is not used during inference. The ablation study in our paper (L257-276) also supports our claim that the accuracy improvement results from avoiding trajectory crossing. We further extended the study in **Q3** to clearly show that our proposed method benefits from mitigating trajectory crossing.
Lastly, we kindly note that the performance gain compared to NODE baselines may come from a precise gradient calculation (L232-234), as NODE baselines using adjoint sensitivity methods can suffer from inaccurate gradient estimation [1, 2].
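To make the joint optimization concrete, a minimal sketch assuming linear predefined dynamics and illustrative MLP encoders (the module names and sizes below are placeholders, not our exact architecture):

```python
import torch
import torch.nn as nn

# Sketch of jointly optimizing the encoders with the dynamics function
# via the flow loss, so the encoders can adjust embeddings to relax
# target trajectory crossings. All sizes are illustrative.
dx, dy, d = 2, 1, 8
f = nn.Sequential(nn.Linear(dx, d))   # data encoder f_phi
g = nn.Sequential(nn.Linear(dy, d))   # label encoder g_varphi
h = nn.Sequential(nn.Linear(d + 1, 32), nn.Tanh(), nn.Linear(32, d))  # dynamics h_theta

def flow_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    z0, z1 = f(x), g(y)
    t = torch.rand(x.shape[0], 1)
    zt = (1 - t) * z0 + t * z1                 # z_t on the predefined line
    v_pred = h(torch.cat([zt, t], dim=-1))
    return ((v_pred - (z1 - z0)) ** 2).mean()  # target velocity z1 - z0

opt = torch.optim.Adam([*f.parameters(), *g.parameters(), *h.parameters()], lr=1e-3)
loss = flow_loss(torch.randn(16, dx), torch.randn(16, dy))
opt.zero_grad()
loss.backward()
opt.step()  # one joint update of encoders and dynamics function
```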
>**Q2.** The comparison group is weak. (Comparison with ANODE and FFJORD)
**A2.** For a more comprehensive comparison, we compared with an ANODE [3] baseline, which uses zero padding to increase the data dimension (Tab. R.2). As discussed in Sec. 6.2, our method consistently outperforms ANODE in training cost, test accuracy, and performance in the low-NFE regime. We will include these results in our final manuscript.
Regarding FFJORD [4], we found that its main focus is on improving continuous normalizing flow in terms of computational efficiency, rather than encouraging simpler trajectories. Thus, we would be happy to hear from the reviewer about how further baselines could be added, so that we can improve the presentation of empirical results. Besides, we believe that RNODE [5], which explicitly regularizes trajectories, already serves as a strong baseline, demonstrating fairly good performance in the low NFE regime for SVHN image classification (Tab. 1).
>**Q3.** In the real world scenario, I am not sure if the crossing trajectory is a crucial point that influences the performance. (comparison with FM)
**A3.** To address the reviewer's concern about the importance of crossing trajectories in real-world scenarios, we compared two naive FM baselines on CIFAR10 while matching the source and target dimensions: one using a zero-padding encoder (ANODE+FM) and another using a learnable encoder trained with an autoencoding objective and then frozen during flow loss optimization (Autoencoder+FM). The analysis below expands our analysis in Sec. 6.3 (L257-276).
Tab. R.1 shows that the disagreement ratio and classification accuracy (measured on the training set) are significantly affected by learning encoders with our proposed objective. As discussed in our paper (L267-272), a trajectory crossing can be identified by a high disagreement ratio between one-step and multi-step prediction results. Our observation that naive FMs show high disagreement ratios suggests that they cannot properly resolve target trajectory crossing, thereby failing to fit the training dataset (low accuracy). Our model, however, achieves high accuracy by preventing most trajectory crossings, as shown by its low disagreement ratio.
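As a sketch, the disagreement ratio could be computed by comparing one-step (Euler, NFE=1) and multi-step integration of the same dynamics function; `h` below is a toy stand-in, not a trained $h_\theta$:

```python
import torch

# Sketch of the disagreement ratio: compare class predictions from
# one-step vs. multi-step integration of the same dynamics; a high
# ratio of differing predictions signals target trajectory crossings.
# `h` is a toy stand-in for a trained dynamics function (z_t, t) -> velocity.
d = 4
h = lambda z, t: torch.tanh(z)  # placeholder dynamics

def integrate(z0: torch.Tensor, steps: int) -> torch.Tensor:
    z, dt = z0.clone(), 1.0 / steps
    for i in range(steps):
        t = torch.full((z.shape[0], 1), i * dt)
        z = z + dt * h(z, t)                    # forward Euler step
    return z

z0 = torch.randn(100, d)
pred_1 = integrate(z0, steps=1).argmax(dim=-1)    # one-step prediction
pred_k = integrate(z0, steps=20).argmax(dim=-1)   # multi-step prediction
disagreement_ratio = (pred_1 != pred_k).float().mean().item()
```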
We also measured the accuracy of predicted velocity, similar to the flow loss (Eq. (5)), by replacing MSE with cosine similarity to disregard the magnitude of the target velocity, which depends on the learned embedding space. If a target trajectory crossing problem occurs, the velocity prediction near the intersection point shows low cosine similarity. As shown in Fig. R.2, naive FM baselines suffer from crossings near endpoints, resulting in low cosine similarity. Our model mitigates this issue, consistently showing high cosine similarity across the entire range of $t$.
By comparing our method with naive FM baselines, we observe that trajectory crossing is indeed crucial in real-world scenarios, and our proposed method benefits from effectively preventing it.
[1] Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE, Zhuang et al. (2020)
[2] MALI: A memory efficient and reverse accurate integrator for Neural ODEs, Zhuang et al. (2021)
[3] Augmented Neural ODEs, Dupont et al. (2019)
[4] FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models, Grathwohl et al. (2019)
[5] How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization, Finlay et al. (2020)
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. To be clear, I found it questionable whether it is truly possible to accurately model a straight trajectory in a situation where the Flow Matching (FM) loss and the embedding (AE) loss are mixed. I appreciate the authors' additional visualization of the learned trajectory. However, I still have some concerns about the method and experiments.
First of all, this work does not develop a loss function or methodology to explicitly straighten the trajectory, as seen in [1] or [2]. Instead, the trajectory is implicitly straightened in the latent space because creating non-crossing trajectories is advantageous for minimizing the FM loss in the embedding space. However, I believe there is a lack of theoretical evidence to support the claim that the trajectory is accurately linearized in a scenario where multiple losses are mixed. Moreover, much seems to depend on the expressivity of the encoder-decoder in the embedding space, which raises concerns; I am particularly concerned that the ability to straighten the trajectory depends heavily on the expressivity of the embedding network.
Furthermore, I believe that the concept of using an embedding to straighten the trajectory via FM is not particularly novel at this point. Learning through flow matching in a latent embedding space has already been discussed in many recent works, including [3] and [4]. These works also discuss the straightened latent trajectory and show good performance in high-dimensional experiments, which leads me to believe that the contribution of this paper is somewhat limited.
Minor point: In the UCI regression task, RNODE with the Dopri solver (which also straightens the trajectory in the data space) demonstrates better performance than the proposed method. I believe the proposed method should show a noticeable improvement over such comparisons.
For these reasons, I believe this paper has limited contribution and will hence keep my score at 4.
**References**
[1] Liu, X., Gong, C., & Liu, Q. (2022). Flow straight and fast: Learning to generate and transfer data with rectified flow.
[2] Lee, S., Kim, B., & Ye, J. C. (2023, July). Minimizing trajectory curvature of ode-based generative models.
[3] Fischer, J. S., Gui, M., Ma, P., Stracke, N., Baumann, S. A., & Ommer, B. (2023). Boosting Latent Diffusion with Flow Matching.
[4] Dao, Q., Phung, H., Nguyen, B., & Tran, A. (2023). Flow matching in latent space.
---
Rebuttal 2:
Title: Official Comment by Authors (1/3)
Comment: We appreciate the opportunity to clarify our key claims and contributions. Our primary aim is to develop a method for training NeuralODE models on paired data in a simulation-free manner. To achieve this, we adopt a flow matching framework. Upon identifying that a naive form of flow matching results in crossing target trajectories, we introduce an embedding space that is learned end-to-end with flow loss. Moreover, our method accommodates a general interpolation form as predefined dynamics, not limited to linear dynamics.
> **Q1**. However, I believe there is a lack of theoretical evidence to support the claim that the trajectory is accurately linearized in a scenario where multiple losses are mixed. [...] I am particularly concerned that the ability to straighten the trajectory highly depends on the expressivity of the embedding network.
**A1**. To theoretically support our claim that our method can learn embeddings with non-crossing latent trajectories under a combination of flow matching and autoencoding losses, we offer a formal proof here.
In fact, we can show a general result for all trajectories of the form $\mathbf{z}_t= \alpha_t \mathbf{z}_0+ \beta_t\mathbf{z}_1$, which includes non-linear trajectories in Sec. 6.3.
Suppose that we have a paired dataset $\mathcal{D}=\lbrace\mathbf{x},\mathbf{y}\rbrace_{i=1}^{N}$ that consists of data $\mathbf{x}\in \mathbb{R}^{d_x}$ and label $\mathbf{y}\in \mathbb{R}^{d_y}$.
We have a data encoder $f_\phi$ and a label encoder $g_\varphi$ that transform data and labels into latents $\mathbf{z}_0, \mathbf{z}_1 \in \mathbb{R}^{d}$, where $\mathbf{z}_0=f\_\phi(\mathbf{x})$ and $\mathbf{z}_1=g\_\varphi(\mathbf{y})$, respectively.
Here we assume $d>d_x, d_y$. We also have a pre-defined dynamics $F(\mathbf{z}_0,\mathbf{z}_1,t) = \alpha_t \mathbf{z}_0+ \beta_t\mathbf{z}_1 =\mathbf{z}_t$. We assume that $\alpha_t$ and $\beta_t$ are smooth and nonzero except for $t=0$ and $t=1$.
Formally, we say that the encoders $(f\_\phi,g\_\varphi)$ induce a target trajectory crossing if there exists a tuple $(t,\mathbf{x},\mathbf{y},\mathbf{x}^\prime,\mathbf{y}^\prime)$ such that $\alpha_t f_\phi(\mathbf{x})+ \beta_t g_\varphi(\mathbf{y})=\alpha_t f_\phi(\mathbf{x}^\prime)+ \beta_t g_\varphi(\mathbf{y}^\prime)$ for $\mathbf{x}\neq\mathbf{x}^\prime$ and $\mathbf{y}\neq\mathbf{y}^\prime$.
**Proposition 1.** There exist $(f_\phi,g_\varphi)$ that always induce non-crossing target trajectory while minimizing the label autoencoding loss.
**Proof.**
Let the latent space be spanned by a set of basis vectors $\mathbb{I}=\lbrace\mathbf{e}\_1,\mathbf{e}\_2,…, \mathbf{e}\_d\rbrace$. Since $d>d_y$, we can find a label encoder $g_\varphi$ that utilizes only the $k$ basis vectors $\mathbb{J}=\lbrace\mathbf{e}\_1,\mathbf{e}\_{2},…, \mathbf{e}\_k\rbrace$ ($d>k\geq d_y$) and minimizes the autoencoding loss (i.e., $g_\varphi(\mathbf{y})=g_\varphi(\mathbf{y}')$ iff $\mathbf{y}=\mathbf{y}'$). Also, we can find a data encoder $f_\phi$ such that $\text{proj}\_{\text{span}(\mathbb{K})}f\_\phi(\mathbf{x}) = \text{proj}\_{\text{span}(\mathbb{K})}f\_\phi(\mathbf{x}')$ iff $\mathbf{x}=\mathbf{x}'$, where $\mathbb{K}=\lbrace\mathbf{e}\_{k+1}, ..., \mathbf{e}\_{d}\rbrace$.
Then, suppose that there exists a tuple $(t,\mathbf{x},\mathbf{y},\mathbf{x}^\prime,\mathbf{y}^\prime)$ such that $\alpha_t f_\phi(\mathbf{x})+ \beta_t g_\varphi(\mathbf{y})=\alpha_t f_\phi(\mathbf{x}^\prime)+ \beta_t g_\varphi(\mathbf{y}^\prime)$, i.e., $\alpha_t (f_\phi(\mathbf{x})-f_\phi(\mathbf{x}'))+ \beta_t( g_\varphi(\mathbf{y})- g_\varphi(\mathbf{y}'))= \mathbf{0}$.
Since $g_\varphi(\mathbf{y})- g_\varphi(\mathbf{y}') =\mathbf{0}$ iff $\mathbf{y} = \mathbf{y}'$ and
$\text{proj}\_{\text{span}(\mathbb{K})}(f\_\phi(\mathbf{x})-f\_\phi(\mathbf{x}'))=\mathbf{0}$ iff $\mathbf{x}=\mathbf{x}'$ by construction, such a tuple does not exist. Therefore, there exist $f_\phi, g_\varphi$ that do not induce target trajectory crossing while minimizing the autoencoding loss.
This finishes the proof.
We particularly note that **Proposition 1** does not require a highly expressive data encoder $f_\phi$, as it only requires $f_\phi$ to be injective (in the subspace $\text{span}(\mathbb{K})$ not utilized by the label encoder).
In addition, while $d_x$ and $d_y$ are dimensions in observation space, we conjecture that the latent dimension $d$ can be made smaller if the data lives on a low-dimensional manifold.
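As an illustration of the construction in Proposition 1, the following small NumPy sketch (our own toy instantiation, not code from the paper) builds a label encoder confined to the first $k$ latent axes and a data encoder that is injective on the complementary axes, then checks numerically that target trajectories from distinct pairs never cross:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_x, d_y, k = 4, 3, 1, 1  # latent dim d > d_x, d_y; labels use the first k axes

def g(y):
    # injective label encoder confined to span(e_1, ..., e_k)
    z = np.zeros(d)
    z[:k] = y
    return z

def f(x):
    # data encoder whose projection onto span(e_{k+1}, ..., e_d) is injective;
    # for simplicity it maps entirely into that subspace
    z = np.zeros(d)
    z[k:] = x
    return z

def trajectory(x, y, ts):
    # pre-defined dynamics z_t = alpha_t z_0 + beta_t z_1 with alpha_t = 1 - t, beta_t = t
    z0, z1 = f(x), g(y)
    return np.array([(1.0 - t) * z0 + t * z1 for t in ts])

ts = np.linspace(0.0, 1.0, 101)
pairs = [(rng.normal(size=d_x), rng.normal(size=d_y)) for _ in range(20)]

min_gap = np.inf
for i in range(len(pairs)):
    for j in range(i + 1, len(pairs)):
        (x, y), (xp, yp) = pairs[i], pairs[j]
        za, zb = trajectory(x, y, ts), trajectory(xp, yp, ts)
        min_gap = min(min_gap, np.linalg.norm(za - zb, axis=1).min())

print(min_gap)  # strictly positive: no two target trajectories intersect
```

Because the two subspaces are disjoint by construction, at any $t$ the $\mathbb{K}$-component of $\mathbf{z}_t$ identifies $\mathbf{x}$ (for $t<1$) and the $\mathbb{J}$-component identifies $\mathbf{y}$ (for $t>0$), mirroring the projection argument in the proof.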
---
Rebuttal 3:
Title: Official Comment by Authors (2/3)
Comment: We now show that, if the label encoder is injective (e.g., enforced by the label autoencoding loss), then minimizing the flow loss is equivalent to learning the data and label encoders to induce non-crossing target trajectories, and learning the dynamics function to fit the induced trajectories.
**Proposition 2.** If $g_\varphi$ is assumed to be injective, the following equivalence holds: $(f_\phi,g_\varphi, h_\theta)$ minimizes the flow loss $\|h\_\theta(\mathbf{z}\_t, t)-\frac{d}{dt}\mathbf{z}\_t\|$ to $0$ for all $t\in[0, 1)$ $\Longleftrightarrow$ $(f_\phi,g_\varphi)$ always induce non-crossing target trajectory and $h_\theta$ perfectly fits the induced target velocity.
**Proof.**
($\Longleftarrow$) If $(f_\phi,g_\varphi)$ always induce non-crossing target trajectory, there is a well-defined target velocity $\frac{d}{dt}\mathbf{z}\_t$ at every
$\mathbf{z}\_t$ which is continuous on $t$. If $h_\theta$ perfectly fits this target velocity for all $(\mathbf{z}\_t, t)$, the flow loss is $0$.
($\Longrightarrow$) We prove by contradiction. Suppose the flow loss is $0$ and there is a crossing trajectory, i.e., some
$(t,\mathbf{x},\mathbf{y},\mathbf{x}^\prime,\mathbf{y}^\prime)$ that
$\mathbf{z}\_t=\mathbf{z}'\_t$ for $\mathbf{x}\neq\mathbf{x}^\prime$ and $\mathbf{y}\neq\mathbf{y}^\prime$. Since the loss is $0$ $\forall t\in[0, 1)$, the dynamics function $h_\theta$ must output $\frac{d}{dt}F(\mathbf{z}_0,\mathbf{z}_1,t)$ at $\mathbf{z}_t$, and $\frac{d}{dt}F(\mathbf{z}'_0,\mathbf{z}'_1,t)$ at $\mathbf{z}'_t$. This is a contradiction since at the point of crossing we have $\mathbf{z}_t=\mathbf{z}'_t$ but $\frac{d}{dt}F(\mathbf{z}_0,\mathbf{z}_1,t)\neq\frac{d}{dt}F(\mathbf{z}'_0,\mathbf{z}'_1,t)$.
This finishes the proof.
The above theoretical evidence aligns with our empirical results, demonstrating the effectiveness of our mixed loss approach. We will include the proof in the final version of our manuscript to strengthen our claim.
---
Rebuttal Comment 3.1:
Title: Official Comment by Authors (3/3)
Comment: >**Q2**. The approach of learning through Flow Matching via latent embedding has been already discussed in many recent works. These works also discuss the straightened latent trajectory and showed good performance in high-dimensional experiments, which leads me to believe that the contribution of this paper is somewhat limited.
**A2**. As discussed in L126-127 in the paper as well as in our previous response, we would like to clarify that our work has important differences from the prior works that apply flow matching in the latent space.
Firstly, our method proposes to **learn the embedding space jointly with the dynamic function to minimize the flow matching loss in an end-to-end manner**. This contrasts with recent works that utilize a **fixed** latent embedding (e.g., ones suggested by the reviewer [3,4]), typically obtained by a pretrained VQ encoder. As discussed in our paper (L257-276) and our response to reviewer 4GBu, a fixed latent does not resolve the issue of target trajectory crossing, which motivates the introduction of learnable encoders. Please note that we also empirically compared the proposed method to flow matching baseline with fixed embedding in Tab. 2 in the paper and also in our rebuttal response **A3** above, demonstrating that learning the data embedding is crucial in our problem.
Secondly, we clarify that there is a notable difference on problem settings considered in our work and the prior works on latent flow matching. We focus on learning a deterministic mapping between paired data, which introduces an **important constraint of preserving the original coupling of data pairs**. This specific formulation renders certain recent techniques for straightening trajectories in generative tasks, such as reflow [3] that straightens the paths by altering the initial coupling, inapplicable to our setting since it breaks the original coupling of data pairs. This also motivated us to **learn** latent embedding to straighten the path while preserving the coupling.
[1] Boosting Latent Diffusion with Flow Matching, Fischer et al., 2023
[2] Flow matching in latent space, Dao et al., 2023
[3] Flow straight and fast: Learning to generate and transfer data with rectified flow, Liu et al., 2022
---
Rebuttal 1:
Rebuttal: Dear reviewers,
We appreciate the constructive feedback provided by all reviewers, which has significantly contributed to the improvement of our paper. We are encouraged by the positive recognition our paper has received, including:
- "begins with a reasonable motivation and is well-presented" (YTiC),
- is "a simple approach that can remarkably improve NODE-based models for supervision tasks" (4GBu),
- is "a good example of the effort to link intuitions from various domains together to create a clear and simple method" (RQFX),
- "presents an interesting approach" for "NODEs with flow matching when preserving the training set coupling is required" (bNYt).
In response to your valuable feedback, we have thoroughly revised our manuscript, addressing your concerns across several aspects.
Please find our reviewer-specific feedback below. We look forward to any further comments or discussion.
Pdf: /pdf/98317f22b9ea912638b09caa5540c9560c579840.pdf | NeurIPS_2024_submissions_huggingface | 2024
Trap-MID: Trapdoor-based Defense against Model Inversion Attacks | Accept (poster) | Summary: The paper proposes Trap-MID, a novel defense method against model inversion attacks, drawing inspiration from backdoor attacks and shortcut-based defenses against adversarial examples. The core idea involves adding poisoned data samples to the target model's training data. This data poisoning is based on the Blended backdoor attack, which calculates a linear combination of the training sample and a noise trigger pattern. To reduce the visibility of the trigger, a discriminator is trained in parallel, akin to a GAN training setup. This discriminator is then used to align the trigger patterns with the distribution of the clean training samples. The proposed defense method is empirically evaluated against common white-box attacks (GMI, KED-MI, LOMMA, PLG-MI) and defense methods (MID, BiDO, NegLS) using the standard 64x64 CelebA training samples.
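The Blended-style poisoning step summarized above can be sketched as follows (a minimal illustration; the blend ratio `alpha` and the random noise trigger are our assumptions, not the paper's exact settings):

```python
import numpy as np

def blend_trigger(x, trigger, alpha=0.1):
    """Blended backdoor injection: a convex combination of a clean
    sample and a trigger pattern, keeping the trigger faint."""
    assert x.shape == trigger.shape
    return (1.0 - alpha) * x + alpha * trigger

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(3, 64, 64))        # clean 64x64 RGB sample
trigger = rng.uniform(0.0, 1.0, size=(3, 64, 64))  # noise trigger pattern
x_poisoned = blend_trigger(x, trigger, alpha=0.1)

# a convex combination of values in [0, 1] stays in [0, 1]
print(x_poisoned.min() >= 0.0 and x_poisoned.max() <= 1.0)  # True
```

A small `alpha` keeps the trigger visually faint while still giving the model a learnable shortcut; the discriminator described above further pushes the triggered samples toward the clean data distribution.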
Strengths: - The paper proposes an intriguing direction for model inversion defenses by utilizing backdoor attacks to induce shortcuts in the model. These shortcuts are then exploited by the attack, resulting in misleading attack outcomes. This approach is both clever and conceptually straightforward, which is advantageous.
- The paper is well-written, with all components and sections clearly described. The experimental evaluation is also well-defined, adhering to the standard procedures in model inversion literature.
- The results presented in the evaluation section are convincing. The experiments demonstrate the method's favorable privacy-utility trade-off and its effective defense capabilities. Additionally, the paper investigates adaptive attacks, a critical aspect of assessing a method's effectiveness. Although the adaptive attack still achieves good results against the proposed defense, this does not diminish the contribution's value. The Appendix further provides an extensive ablation and sensitivity analysis, essential for understanding the impact of hyperparameter selection and individual design choices.
Weaknesses: - Although the evaluation is conducted on four attack algorithms, two important aspects are missing in my opinion. First, the proposed defense should also be tested against black-box/label-only attacks. In particular, testing the defense method against the label-only BREP-MI [1] attack would be very interesting since this method aims to reconstruct samples by maximizing their distance to the decision boundaries. Given that the proposed Trap-MID leverages a model's ability to exploit shortcuts in its predictions, I am curious if the poisoned shortcut samples are differently placed in a model's embedding space. It might be the case that these samples are actually closer to the decision boundary compared to clean training samples. If this is true, the defense might fail. So even if white-box attacks are usually considered stronger, running some label-only attacks could add an intriguing perspective to the paper.
- Similarly, the defense is only evaluated in a low-resolution setting. However, it is important to also investigate a high-resolution setting, which can be considered more practical. In this context, the method should also be tested against PPA [2] to provide a comprehensive evaluation and see if the promised defense effect holds in this setting. I want to emphasize that I am not requesting additional experiments merely as a form of criticism, but because investigating these two additional attacks would add significant value to the paper, helping to support the claims.
- The evaluation heavily relies on attack accuracy and KNN Distance, both computed on an evaluation model trained on the target dataset. However, I think these metrics are limited as they tell us little about the actual visual similarity between reconstructed results and the training samples. For example, in Fig. 2, the images reconstructed from the NegLS model look unrealistic and reveal only limited information about the target identity. Still, the Attack Accuracy in Tab. 1 is high for PLG-MI and the KNN-Dist is low. Both metrics seem very susceptible to adversarial noise and misguided attack results. Similarly, the FID score only assesses the image quality of the reconstructed samples and compares it to the training data. But this metric has no meaning regarding privacy leakage: generating samples of the same style as the target training data leads to a low FID, even if the images reveal no private attributes. Conversely, images following a substantially different distribution can still reveal private features, even if the FID score is high. Additional metrics are therefore necessary to assess the actual effectiveness. From literature, one could use a model like CLIP or for the face recognition setting, a FaceNet model to assess the identity similarity between attack results and training samples (see, e.g., [2]). Another option would be the knowledge extraction score introduced in [4], which measures the information content included in the attack results.
- While I appreciate that the paper includes a theoretical analysis of the method and its effectiveness, I have some doubts about the formal proof. Particularly, I do not think that the KL divergence is a valid measure here to assess the trapdoor visibility. The problem is that the KL divergence compares two distributions. However, similar distributions might not necessarily mean that the triggers are invisible, and vice versa. For example, one could use triggers that are clearly visible but follow the clean data distribution: physical triggers instead of noise, say, and even noise patterns could be designed to follow the clean data distribution. This limits the expressiveness of the theorem since it relies on the KL divergence. In my view, the analysis should use a sample-wise measurement that compares a clean sample directly with its poisoned counterpart to provide a reliable measurement of trigger visibility.
- The work of [6] investigates a similar direction (using surrogate samples to misguide the attack). While their approach is partly limited in the number of classes to defend, I think the approach should at least be discussed in the related work section since the underlying idea of providing false guidance for the attack is conceptually similar.
Small Remarks:
- L118: "Given access to the target classifier f, the adversary aims to recover the private data of class y by estimating the posterior distribution p(X|y)..." -> While this definition is not wrong, I think model inversion attacks do not necessarily require the adversary to recover the whole posterior distribution; reconstructing a single sample can be enough for a serious privacy breach. While there exist attacks directly aiming to reconstruct the posterior, e.g., VMI [3], this might not be true for all attacks.
- There is another recent defense that might be included in the related work section [5]. The paper was very recently accepted at CVPR and only became available close to the NeurIPS deadline. So, I do not expect the paper to include this work in its evaluation. Just wanted to highlight this related work.
- L517: I think the URL is the wrong one here, as it leads to the KED paper.
- L525: If I have not missed it, the paper never actually defines the evaluation model architecture. Defining this architecture once helps the reproducibility of the paper.
-> I encourage the authors to participate in the rebuttal and will increase my scores if the weaknesses mentioned are addressed adequately.
References:
[1] Kahla et al., Label-Only Model Inversion Attacks via Boundary Repulsion. CVPR 2022
[2] Struppek et al., Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. ICML 2022
[3] Wang et al., Variational model inversion attacks. NeurIPS 2021
[4] Struppek et al., Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks. ICLR 2024
[5] Ho et al., Model Inversion Robustness: Can Transfer Learning Help? CVPR 2024
[6] Chen et al. Data-Centric Defense: Shaping Loss Landscape with Augmentations to Counter Model Inversion. ICML Workshop 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can inference-time mitigation strategies from the backdoor or shortcut literature be employed to eliminate the model's shortcut behavior, thereby enabling stronger attacks?
- Table 6 in the appendix: Why does the attack performance increase for other attacks besides PLG-MI when the defense uses the trapdoor loss and a discriminator loss compared to fixed triggers? Is there any intuition behind these results?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Section A.2. provides a comprehensive limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review and valuable feedback. We address the specific weaknesses and questions raised in your comments below:
**W1: The defense should be tested against black-box/label-only attacks**
Our experiments show that BREP-MI, a label-only attack, is ineffective against Trap-MID: it requires over 820,000 iterations to initialize latents for all identities and achieves 0% attack accuracy in untargeted attacks recovering 300 targets, which indicates Trap-MID's efficacy. Detailed results are available in Table 1 of the attached file in the Author Rebuttal. We will add this analysis to our paper.
**W2: It is important to investigate a high-resolution setting**
We add the experiments against PPA for a high-resolution setting. Due to time constraints, we modified the attack to optimize 20 samples and select 5 in the final stage (PPA originally optimized 200 samples and selected 50).
The following table shows the defense performance on DenseNet-169. Although Trap-MID doesn't fully mitigate PPA, it preserves privacy to a certain degree without an accuracy drop. Besides, increasing trapdoor loss weight or combining Trap-MID with NegLS can improve the defense further. We discuss more about the defense combination in Q1.
|Defense|Acc ↑|AA-1 ↓|AA-5 ↓|$\delta_{face}$ ↑|$\delta_{eval}$ ↑|FID ↑|
|-|-|-|-|-|-|-|
|Trap-MID|89.98|59.48|72.22|0.9468|182.90|49.64|
|$\beta$=0.5|83.39|11.78|19.24|1.3985|269.99|63.04|
|w/ NegLS|83.56|**1.60**|**4.90**|**1.4744**|**279.68**|**79.37**|
Results including other models can be found in Table 2 of the attached file. We regret that we cannot conduct hyper-parameter tuning on other models due to time constraints. We will add this analysis to our paper.
**W3: The evaluation metrics are limited**
We acknowledge that each metric has limitations, yet attack accuracy/KNN distance and FID can complement each other. The former pair estimates the extracted attributes, while the latter measures naturalness and stylistic similarity. However, we agree that a universal metric would be valuable for a straightforward comparison.
Our PPA experiments at W2 include additional metrics such as FaceNet's feature distance used by PPA and improved precision, recall, density, and coverage used by VMI. We also use them to evaluate PLG-MI's results for a more comprehensive analysis. The results are available in Tables 2 and 3 of the attached file. We will add this analysis to our paper.
**W4: The analysis should use a sample-wise measurement to provide a reliable measurement of trigger visibility**
Thank you for highlighting this concern. The "trapdoor visibility" in Definition 2 might be misleading. We use KL divergence to ensure that triggered data is natural enough to be generated in attacks. Making them similar to the original images is one method to achieve this objective. If triggered data is similar enough to its counterpart, we have
$$\text{If } \forall x \in X,\\; \log{p(x)} - \log{p(\Pi(x))} \le \epsilon,\text{ then } D_{KL}(p(X)||p(\Pi(X))) \le \epsilon,$$
which fulfills Definition 2.
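Spelled out, and reading $p(\Pi(X))$ pointwise as $p(\Pi(x))$ (a sketch under that assumption):

$$D_{KL}(p(X)||p(\Pi(X))) = \mathbb{E}_{x\sim p}\left[\log{p(x)} - \log{p(\Pi(x))}\right] \le \mathbb{E}_{x\sim p}\left[\epsilon\right] = \epsilon.$$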
We believe this theoretical analysis suggests the potential of various trigger designs. For instance, if anyone in a green shirt is classified as identity 1, the attacks could be misled into manipulating shirt colors.
**W5: Missing related work**
Thank you for mentioning these works. We will add them to Related Work.
**W6: The definition of objective might not be true for all attacks**
This definition doesn't mean that attacks aim to recover the entire posterior distribution. Instead, the attack relies on such a distribution. Most MI attacks guide optimization by identity and prior losses, maximizing $p(y|X)$ and $p(X)$. According to Bayes' theorem, given target label $y$, the attack process can then be viewed as maximizing the posterior probability $p(X|y)$.
We will refine the description here to make it clearer.
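Concretely, the Bayes step is

$$p(X|y)=\frac{p(y|X)p(X)}{p(y)} \propto p(y|X)p(X),$$

so, for a fixed target label $y$, jointly maximizing the identity term $p(y|X)$ and the prior term $p(X)$ maximizes the posterior $p(X|y)$ (up to the loss weighting used by each attack).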
**W7: The link to NegLS's implementation**
Since NegLS hadn't released their code when we conducted experiments, we adapted KED-MI's code to implement NegLS by ourselves. We will include the link to our source code after releasing it.
**W8: Missing evaluation model's architecture**
Following GMI, the evaluation classifier is Face.evoLVe with an input size of 224x224. We will add this information to our paper.
**Q1: Can inference-time mitigation strategies from the backdoor/shortcut literature eliminate the model's shortcut behavior?**
If such mitigations are differentiable, one method is to apply them before feeding synthetic data into the victim model. Our work is a first step in trapdoor-based defense. Future work could explore the impact of recent backdoor/shortcut techniques on both attacks and defenses.
Besides, we found that combining Trap-MID and NegLS further improves defense, suggesting that Trap-MID can complement existing methods. Intuitively, NegLS makes it harder to extract private data and therefore makes trapdoors more attractive. A future direction explores hybrid defense to counter specific adaptive attacks.
|Attack|Defense|Accuracy ↑|AA-1 ↓|AA-5 ↓|KNN Dist ↑|FID ↑|
|-|-|-|-|-|-|-|
|LOMMA (KED-MI)|Trap-MID|81.37|61.25|85.76|1404.77|24.19|
||w/ NegLS|77.10|**42.47**|**70.64**|**1521.82**|**37.22**|
Results against other attacks can be found in Table 4 of the attached file. We will add this analysis to our paper.
**Q2: Why does the attack performance increase for other attacks besides PLG-MI when using trapdoor loss and discriminator loss?**
This may stem from the capacity of the GANs used in the attacks.
The trapdoor loss searches for triggers that are easy for the target model to learn, in order to retain utility. However, such crafted triggers may be harder to generate, leading to lower defense performance against weaker generators.
The discriminator loss encourages invisible triggers, which is essential for deceiving stronger discriminators such as PLG-MI's. However, generating invisible triggers requires fine-grained adjustments, making weaker generators less likely to be misled.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing additional details and experiments. Many of my remarks and weaknesses have been addressed. Yet, some weaknesses and questions remain:
1.) I still believe the evaluation metrics are limited, using only the evaluation model + the FID score (see my initial review). I also do not agree that the FID score actually complements the other two metrics, since it has no clear implication on privacy leakage. Why not using something like FaceNet also for the 64x64 experiments?
2.) "Since NegLS hadn't released their code..." -> I just checked, the authors refer to the code in their paper and the corresponding Github repo also seems to provide configurations for training models with negative LS. Yet, it seems like the authors implemented the defense by themselves, which should also be fine.
3.) The impact of Trap-MID in defending in the high-resolution setting against PPA seems limited. I agree that it offers a nice improvement over the negative LS defense, but on its own the defense seems rather weak.
4.) I also agree with Reviewer P6SC that the runtime is an important aspect. While the runtime is still somewhat reasonable, it limits the approach to some extent.
5.) Regarding the evaluation model, is the input size of the Face.evolve model really 224x224? It seems somewhat strange that the evaluation model for the 64x64 setting requires an upscaling of factor 4 on the attack samples.
Overall, after reading all reviews and the corresponding rebuttals, I decided to keep my initial positive score.
---
Rebuttal 2:
Comment: Thank you for your thoughtful questions and feedback. We address them as follows:
**Q1: Why not use something like FaceNet also for the 64x64 experiments?**
The FaceNet evaluation of PLG-MI's 64x64 experiments can be found in Table 3 of our attached file at Author Rebuttal, which includes the following additional metrics:
1. **$\delta_{face}$**: The **FaceNet feature distance** between each recovered image and the nearest private data. A higher value indicates less similarity to the private data.
2. **Improved Precision**: Measures whether each recovered image lies within the estimated manifold of private data in the **InceptionV3 feature space**. A lower value indicates less similarity to the private data.
3. **Improved Recall**: Evaluates whether each private image lies within the manifold of recovered data in the **InceptionV3 feature space**. A lower value suggests that private data is less likely to be reproduced by the generator.
4. **Density**: Quantifies how many private-sample neighborhood spheres contain each recovered image in the **InceptionV3 feature space**. A lower value indicates less similarity to the private data.
5. **Coverage**: Assesses how many private samples have a neighborhood sphere containing at least one recovered image in the **InceptionV3 feature space**. A lower value suggests that private data is less likely to be reproduced by the generator.
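A minimal NumPy sketch of the density and coverage computations listed above (our own illustration: feature extraction is omitted, and random vectors stand in for InceptionV3 features):

```python
import numpy as np

def pairwise_dist(a, b):
    # Euclidean distances between every row of a and every row of b
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def density_coverage(real, fake, k=5):
    """Density and coverage over feature vectors real: (N, D), fake: (M, D).
    Each real sample defines a neighborhood sphere whose radius is the
    distance to its k-th nearest real neighbor."""
    d_rr = pairwise_dist(real, real)
    radii = np.sort(d_rr, axis=1)[:, k]       # index 0 is the sample itself
    d_fr = pairwise_dist(fake, real)          # (M, N)
    inside = d_fr <= radii[None, :]           # fake j inside real i's sphere
    density = inside.sum() / (k * fake.shape[0])
    coverage = inside.any(axis=0).mean()      # reals covered by >= 1 fake
    return density, coverage

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))
fake_close = real[:100] + 0.01 * rng.normal(size=(100, 8))  # near the real manifold
fake_far = rng.normal(loc=10.0, size=(100, 8))              # far from it

print(density_coverage(real, fake_close, k=5))  # high density and coverage
print(density_coverage(real, fake_far, k=5))    # near-zero density and coverage
```

Lower density and coverage for the recovered images indicate less similarity to (and reproduction of) the private data, which is why lower values are better from the defender's perspective.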
As shown in the results below, Trap-MID outperforms existing defenses in FaceNet distance and ranks second to NegLS in most other metrics.
|Defense|Acc ↑|$\delta_{face}$ ↑|Precision ↓|Recall ↓|Density ↓|Coverage ↓|
|-|-|-|-|-|-|-|
|-|87.83|0.6110|19.39|13.17|0.0893|0.1498|
|MID|76.67|0.6410|21.25|33.96|0.0913|0.1734|
|BiDO|79.62|0.7058|20.17|10.17|0.0807|0.1367|
|NegLS|81.76|0.7587|**3.80**|**0.00**|**0.0244**|**0.0189**|
|Trap-MID|81.62|**1.3845**|9.56|71.63|0.0328|0.0728|
We notice that Trap-MID's ability to classify arbitrary images with injected triggers as corresponding classes results in more diverse recovered images, leading to a broader manifold and higher recall. Besides, all metrics except FaceNet distance utilize the same InceptionV3 model as FID, making NegLS excel in these metrics with its less natural recovered images.
**Q2: It seems like the authors implemented the defense by themselves, which should also be fine**
Yes, the authors of NegLS released their code after we implemented it. We have verified that our implementation adheres to the original training algorithm, and the detailed configurations can be found in Appendix C.6 in our paper.
**Q3: The impact of Trap-MID in defending in the high-resolution setting against PPA seems limited**
It is worth noting that with proper configuration, Trap-MID can still perform effectively against PPA. For example, as shown in Table 2 of our attached file, Trap-MID reduces PPA's attack accuracy to 11.78% on DenseNet-169 when $\beta=0.5$. However, we acknowledge that tuning hyper-parameters is necessary for optimal defense across different datasets and architectures.
Besides, we noticed that the success of hybrid defense like Trap-MID + NegLS is not merely owed to either of them but lies in their orthogonal strategies that (1) make it harder to extract private data and (2) provide misleading shortcuts. For instance, as shown in Table 2 of our paper and Table 4 of the attached file, this combination reduces LOMMA (KED-MI)'s attack accuracy to 42.47%, whereas either method alone achieves only 61.25% or 77.67%.
**Q4: While the runtime is still somewhat reasonable, it limits the approach to some extent**
We agree that computational efficiency is an important consideration, despite the effectiveness of Trap-MID. We will add this discussion to our Future Work section.
**Q5: Regarding the evaluation model, is the input size of the Face.evolve model really 224x224?**
We apologize for the mistake. The input size of the evaluation model is actually 112x112. The 64x64 images are indeed resized to fit the model's input size.
---
Rebuttal Comment 2.1:
Comment: Thank you for addressing my additional questions. Regarding Q1, that was my mistake; I misunderstood the experimental setting in the table when I first reviewed it. I have no further questions and will maintain my previous score.
---
Reply to Comment 2.1.1:
Comment: We sincerely appreciate your positive feedback and valuable comments! Your insights and suggestions are instrumental in refining our work.
---
Summary: The paper introduces a backdoor-based MI defense called Trap-MID. In this method, a trapdoor is integrated into the model to predict a specific label when the input is injected with the corresponding trigger. Consequently, this trapdoor information acts as a "shortcut" for MI attacks, causing them to extract trapdoor triggers instead of private data.
Strengths: The paper provides the new insights about Model Inversion from Backdoor injection perspective.
The idea is intuitive and theoretical analysis is provided.
Weaknesses: In line 137, “MI attacks leverage a discriminator to approximate generic prior and ensure natural outcomes.” This statement might not generalize to SOTA MI attacks that do not utilise a GAN discriminator, such as PPA [1].
The evaluation is based on low-resolution (64x64) MI attacks, which are not very practical for modern models. I suggest the authors evaluate the effectiveness of Trap-MID and compare it with baseline defenses in a high-resolution setup such as PPA [1] or MIRROR [2].
The main results are on VGG-16, which is a quite outdated architecture. Although there are additional results on Face.evoLVe and IR-152 in the Appendix, I strongly encourage the authors to evaluate on other architectures, such as those in PPA [1].
The visualisation results are not very convincing to me. For example, in Fig. 2, most of the reconstructed images are very different from the private data, despite some defence baselines still having a high attack accuracy. I suggest the authors provide more visualisations for reference.
I understand that visualisation could be subjective. I suggest the authors conduct a comprehensive study to further confirm the effectiveness of Trap-MID when comparing with other defence baselines.
[1] Struppek, Lukas, et al. "Plug & play attacks: Towards robust and flexible model inversion attacks." ICML-2022.
[2] An, Shengwei et al. MIRROR: Model Inversion for Deep Learning Network with High Fidelity. Proceedings of the 29th Network and Distributed System Security Symposium.
Technical Quality: 2
Clarity: 3
Questions for Authors: I have a question about the sensitivity of Trap-MID to hyper-parameters. We know that BiDO is sensitive to hyper-parameters as it needs to optimise two of them. From my understanding, Trap-MID also introduces two hyper-parameters. I wonder how sensitive Trap-MID is to these two hyper-parameters?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and helpful comments. We address the specific weaknesses and questions raised in your review below:
**W1: “MI attacks leverage a discriminator to approximate generic prior and ensure natural outcomes” — this statement might not generalize to MI attacks that don't utilise a GAN discriminator, such as PPA**
We appreciate your feedback. This statement refers to the challenge of misleading MI attacks using trapdoors. Since MI attacks often rely on GANs to produce natural outcomes, without explicit constraints like the $l_2$ or $l_\infty$ distance used in adversarial attacks, it becomes harder to design effective triggers.
PPA uses a pre-trained StyleGAN2 generator and optimizes latents during attacks. Although the discriminator isn't involved in the process, it guides the generator to produce realistic images during GAN training, which indirectly ensures natural outcomes. We will refine this discussion to make it clearer.
**W2: The evaluation is based on low-resolution (64x64) MI attacks, which are not very practical for modern models. The main results are on VGG-16, which is quite outdated.**
To address this, we have added experiments against PPA to demonstrate a high-resolution (224x224) scenario with modern models. Due to time and computational constraints, we modified the attack to optimize 20 samples and select 5 in the final stage (PPA originally optimized 200 samples and selected 50, which would take about 10 days on our machine).
The following table shows the defense performance on DenseNet-169. Although Trap-MID doesn't fully mitigate PPA with default settings, it preserves privacy to a certain degree without an accuracy drop. Besides, increasing trapdoor loss weight further improves defense performance, and combining Trap-MID with NegLS (keeping $\beta$=0.2) even reduces attack accuracy to 2%. This result demonstrates Trap-MID's effectiveness in privacy protection under different scenarios. We discuss more about the defense combination in Q3 of Author Rebuttal.
|Defense|Acc ↑|AA-1 ↓|AA-5 ↓|$\delta_{face}$ ↑|$\delta_{eval}$ ↑|FID ↑|
|-|-|-|-|-|-|-|
|-|87.08|92.00|98.28|0.7240|140.93|37.76|
|Trap-MID|89.98|59.48|72.22|0.9468|182.90|49.64|
|$\beta$=0.5|83.39|11.78|19.24|1.3985|269.99|63.04|
|w/ NegLS|83.56|**1.60**|**4.90**|**1.4744**|**279.68**|**79.37**|
Detailed results, including other architectures, can be found in Table 2 of the attached file in Author Rebuttal. We regret that we cannot conduct hyper-parameter tuning on other models due to time constraints. We will add this analysis to our paper.
**W3: The visualization results are not very convincing to me. I suggest the authors provide more visualizations for reference and conduct a comprehensive study to further confirm the effectiveness of Trap-MID.**
We acknowledge that visualization can be subjective due to the ambiguous definition of "class representative." However, in Figure 2 of our paper, we found that recovered images from Trap-MID show more distinct attributes from private data. For example, Identity 1 images have different skin tones, Identity 2 and 5 images have different hairstyles, and Identity 4 images show different genders. We also provide more recovered samples in Figure 1 of the attached file in Author Rebuttal, in which Identity 4 images from Trap-MID have different hair colors from private data compared to other defenses.
For the quantitative analysis, our PPA experiments in W2 include additional evaluation metrics, such as FaceNet's feature distance used in [1], as well as the improved precision, recall, density, and coverage used in [2]. We also conduct the same evaluation on our PLG-MI results. These results, found in Tables 2 and 3 of the attached file in Author Rebuttal, provide a more comprehensive analysis of Trap-MID. We will add this analysis to our paper.
**Q1: How sensitive is the Trap-MID to the hyper-parameters?**
We discussed the impact of blend ratio in Appendix D.3 and trapdoor loss weight in Appendix D.4. According to Tables 7 and 8 in our paper, Trap-MID is not very sensitive to these hyper-parameters regarding test accuracy, maintaining about 81-84% accuracy under different configurations.
For defense performance, Table 7 in our paper shows that a lower blend ratio generally provides better defense. There is an abrupt drop in attack accuracy against KED-MI and PLG-MI (e.g., PLG-MI's attack accuracy drops from 93.86% to 1.92% when the blend ratio decreases from 0.05 to 0.03), indicating that an invisible enough trigger is essential to mislead certain attacks.
Additionally, Table 8 in our paper shows that increasing trapdoor loss weight from 0.02 to 0.2 reduces PLG-MI's attack accuracy from 23.84% to 6.23%.
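For intuition on why a smaller blend ratio yields a less visible trigger, here is a minimal sketch of the linear blending formulation commonly used for trapdoor-style triggers (an assumption on our part: Trap-MID's triggers are additionally optimized via trapdoor and discriminator losses, and `inject_trigger` is an illustrative name, not the paper's implementation):

```python
import numpy as np

def inject_trigger(image, trigger, blend_ratio):
    """Linearly blend a trigger into an image; smaller ratios are less visible."""
    return (1.0 - blend_ratio) * image + blend_ratio * trigger

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))    # a 64x64 RGB image with values in [0, 1]
trigger = rng.random((64, 64, 3))  # a (hypothetical) trigger pattern in [0, 1]

# With blend ratio 0.03 (the regime where PLG-MI's attack accuracy collapses
# in Table 7), the per-pixel perturbation is bounded by the ratio itself.
poisoned = inject_trigger(image, trigger, blend_ratio=0.03)
assert np.abs(poisoned - image).max() <= 0.03
```

Under this formulation, the trigger's visibility shrinks linearly with the blend ratio, which is consistent with the observation that a sufficiently invisible trigger is needed to mislead certain attacks.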
[1] Struppek et al., Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. ICML 2022
[2] Wang et al., Variational model inversion attacks. NeurIPS 2021
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer BGpu
Comment: I appreciate the authors' efforts in providing a thorough rebuttal and conducting additional experiments.
Some of my concerns have been well addressed. However, a few issues remain:
- The effectiveness of Trap-MID in defending against more practical MI setups (e.g., PPA attacks, high-resolution) is still limited.
- My concerns regarding the presented visualization results still remain. While I appreciate the authors' effort in presenting additional results on other metrics and highlighting some examples, I noticed some inconsistencies between the visualization results (in the attached file and Appendix) and the quantitative results. For example, my observation (which is, of course, subjective) is that the visualizations suggest that Trap-MID and NegLS perform similarly in defending against PLG-MI (indeed, Trap-MID could be slightly better). However, the quantitative results indicate that Trap-MID outperforms NegLS by over 80%.
- I also agree with Reviewer P6SC that runtime is a critical aspect of Trap-MID that should be addressed to further strengthen the method.
Besides, I have a few questions regarding the evaluation models. Could you confirm whether the exact evaluation models used in your experiments are the same as those from existing works, which are publicly available and widely used in MI research? Also, could you clarify the input resolution of these evaluation models for both high-resolution and low-resolution MI setups?
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and the time you've taken to provide valuable feedback. We address your remaining concerns below:
**Q1: The effectiveness of Trap-MID in defending against more practical MI setups (e.g., PPA attacks, high-resolution) is still limited**
While Trap-MID's defense performance against PPA may be limited with the default setup, it is worth noting that it can be effective with proper configuration. For instance, Table 2 in our attached file shows that Trap-MID reduces PPA's attack accuracy to 11.78% on DenseNet-169 when $\beta=0.5$. However, we acknowledge that tuning hyper-parameters is essential for optimal defense across different datasets and model architectures.
**Q2: My observation is that the visualizations suggest that Trap-MID and NegLS perform similarly in defending against PLG-MI. However, the quantitative results indicate that Trap-MID outperforms NegLS by over 80%.**
We understand your concern regarding the perceived inconsistencies between the visualizations and attack accuracy. We found that metrics based on the evaluation model, such as attack accuracy and KNN distance, estimate the extracted attributes, while FID better identifies unnatural or out-of-distribution outcomes. Successful attacks should perform well across all these metrics, recovering realistic images with private attributes.
For instance, while MID, BiDO, and NegLS suffer from high attack accuracy (89-93%) against PLG-MI, NegLS's higher FID (69 vs. 14-17) suggests more unnatural recovered samples, indicating its better defense performance. Therefore, Trap-MID does not outperform these approaches to the same degree.
Trap-MID’s lower attack accuracy and higher KNN distance imply that its recovered images reveal fewer private attributes than other methods (as discussed in our previous rebuttal). However, Trap-MID's slightly lower FID suggests that the recovered images from NegLS are slightly more unnatural (58 vs. 69), which might explain the similar visualization results observed between NegLS and Trap-MID.
Further evaluations of PLG-MI's results are provided in Table 3 of our attached file:
|Defense|Acc ↑|$\delta_{face}$ ↑|Precision ↓|Recall ↓|Density ↓|Coverage ↓|
|-|-|-|-|-|-|-|
|-|87.83|0.6110|19.39|13.17|0.0893|0.1498|
|MID|76.67|0.6410|21.25|33.96|0.0913|0.1734|
|BiDO|79.62|0.7058|20.17|10.17|0.0807|0.1367|
|NegLS|81.76|0.7587|**3.80**|**0.00**|**0.0244**|**0.0189**|
|Trap-MID|81.62|**1.3845**|9.56|71.63|0.0328|0.0728|
$\delta_{face}$ measures the feature distance between recovered and private data using FaceNet pre-trained on VGGFace2, where the highest value for Trap-MID indicates fewer extracted facial attributes. In contrast, the other metrics, measured by InceptionV3 (the same backbone as FID), suggest that NegLS produces more unnatural outcomes. Trap-MID ranks second to NegLS in most metrics, while its ability to classify arbitrary images with injected triggers as the corresponding classes results in more diverse recovered images, leading to a broader manifold of recovered data and higher recall.
**Q3: I also agree with Reviewer P6SC that runtime is a critical aspect of Trap-MID that should be addressed to further strengthen the method.**
We agree that computational efficiency is an important aspect, despite the effectiveness of Trap-MID. We will include this consideration in our Future Work section.
**Q4: Could you confirm whether the exact evaluation models used in your experiments are the same as those from existing works? Also, could you clarify the input resolution of these evaluation models for both high-resolution and low-resolution MI setups?**
We apologize for the previous mistake regarding the input resolution in our rebuttal to Reviewer bcsy. The correct input size of the evaluation model in the low-resolution setting is 112x112, not 224x224 as previously mentioned. Below are the details about data resolution and the evaluation models used:
- **Low-resolution setups (against GMI, KED-MI, LOMMA, PLG-MI, BREP-MI)**
- Target model input resolution: 64x64
- Adversary's generator output resolution: 64x64 (resized to fit target/evaluation models' input sizes)
- Evaluation model input resolution: 112x112
- We used the publicly available checkpoint provided in PLG-MI's official GitHub repository for the evaluation model, identical to that in previous works.
- **High-resolution setups (against PPA)**
- Target model input resolution: 224x224
- Adversary's generator output resolution: 1024x1024 (center cropped to 800x800 and resized to fit target/evaluation models' input sizes)
- Evaluation model input resolution: 299x299
- Since the PPA's authors have not released their evaluation model checkpoint, we reproduced it using their official code and configuration file. While the models may not be identical, they were trained with the same setup. Our evaluation model's test accuracy is 91.34%, compared to 93.28% reported in their paper. | Summary: The paper proposes Trap-MID, a trapdoor-based defense mechanism to protect deep neural networks (DNNs) against Model Inversion (MI) attacks. The technique involves integrating trapdoors into the model to mislead MI attacks, causing them to extract trapdoor triggers rather than private data. The authors provide theoretical insights into the effectiveness and invisibility of these trapdoors and validate their approach through empirical experiments, demonstrating superior performance against various MI attacks without the need for extra data or significant computational overhead.
Strengths: - Trap-MID presents the use of trapdoors to mislead MI attacks, filling a gap in the existing defense strategies.
- The empirical defense results seem good.
Weaknesses: - The proposed method involves multiple optimization processes, making it computationally expensive and potentially impractical for large-scale or resource-constrained applications.
- According to Theorem 1, a more effective (larger $\delta$) and invisible (smaller $\epsilon$) trapdoor can lead to a larger lower bound to the expected posterior probability, making it more likely to be extracted by an MI attack. However, the paper does not analyze the discrepancy between the two posterior probability distributions, which is crucial for understanding the practical implications of the defense mechanism.
- The paper's organization and logical flow can be further improved for better readability and comprehension. The transitions between sections are sometimes abrupt, and a more cohesive structure would enhance the overall presentation.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We address the specific weaknesses raised in your review below:
**W1: The proposed method involves multiple optimization processes, making it computationally expensive and potentially impractical for large-scale or resource-constrained applications.**
We appreciate your concern. Here are the training times of different defense methods in our experiments:
|Defense Method|Training Time|
|-|-|
| Unprotected model | 15 mins |
| MID | 15 mins |
| BiDO | 16 mins |
| NegLS | 35 mins |
| Trap-MID | 1 hour 15 mins |
While Trap-MID does take the longest time due to three gradient updates per epoch (discriminator, triggers, and target model), it is worth noting that it also significantly surpasses other defenses against recent MI attacks. Moreover, Trap-MID still requires less data and computational cost than the existing trap-based defense, NetGuard, which demands an additional dataset, training an extra classifier, and conducting shadow attacks.
We believe that it would be a valuable future direction to develop a more efficient trigger generation to make Trap-MID more practical for large-scale applications (e.g., pre-computing triggers with fewer steps to reduce overhead during model training).
**W2: The paper does not analyze the discrepancy between the two posterior probability distributions, which is crucial for understanding the practical implications of the defense mechanism.**
Theorem 1 establishes a lower bound for the posterior probability of poisoned data compared to benign data:
$$\mathbb{E}\_{Y \sim p(Y)} \mathbb{E}\_{X \sim p(X)}[\log p_f(\Pi_y(X)|Y)] \ge \mathbb{E}\_{(X, Y) \sim p(X, Y)}[\log p_f(X|Y)] + (\delta - \epsilon)$$
For example, since the unprotected model isn't injected with a trapdoor, it would have a negative trapdoor effectiveness $\delta$, leading to a lower expected posterior probability of poisoned data $p_f(\Pi_y(X)|y)$ than benign data $p_f(X|y)$ and making MI attacks more likely to extract private data.
In contrast, a trapdoored model with a stronger predictive power on invisibly triggered data (especially when $\delta > \epsilon$) would result in a higher expected posterior probability for poisoned data compared to benign data, misleading MI attacks to recover triggered data instead.
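As a hypothetical numerical illustration (values chosen for exposition, not taken from the paper): suppose the trapdoor effectiveness is $\delta = 0.5$ and the visibility is $\epsilon = 0.1$. Theorem 1 then gives

$$\mathbb{E}\_{Y \sim p(Y)} \mathbb{E}\_{X \sim p(X)}[\log p_f(\Pi_y(X)|Y)] \ge \mathbb{E}\_{(X, Y) \sim p(X, Y)}[\log p_f(X|Y)] + 0.4,$$

so the expected log-posterior of poisoned data exceeds that of benign data by at least 0.4 nats, making the triggered distribution the more attractive target for an MI attack.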
**W3: The paper's organization and logical flow can be further improved for better readability and comprehension.**
Thank you for highlighting this issue. We will refine our explanations and improve the organization for better readability. We would greatly appreciate any specific feedback on the sections that need improvement to help us enhance our presentation and address any confusion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your effort in providing a thorough response, I decided to slightly increase my score. After reading all the reviews and the corresponding rebuttals, I believe that the paper still has significant room for improvement, and I encourage the authors to take all the comments into account to further refine and enhance the manuscript.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback! We will carefully review all the comments and suggestions to further refine and enhance our paper. | Summary: This paper presents Trap-MID, a novel defense mechanism against model inversion attacks that utilizes trapdoor injection techniques. By incorporating a trapdoor into the model, Trap-MID misleads MI attacks into extracting trapdoor information instead of private data, effectively preserving privacy. The paper contributes to the field by:
1. Introducing a Novel Defense Mechanism: Trap-MID pioneers the exploration of the relationship between trapdoor injection and MI defense, providing a new approach to tackle this challenging privacy problem.
2. Theoretical Insights and Empirical Validation: The paper provides theoretical analysis on the impact of trapdoor effectiveness and visibility on deceiving MI attacks and validates its effectiveness through extensive experiments, showcasing its superior performance compared to existing defenses.
Strengths: + New Application of Trapdoors.The paper creatively applies trapdoor injection techniques, traditionally used for adversarial detection, to defend against model inversion attacks. This novel application opens up new avenues for privacy-preserving DNNs.
+ Theoretical Analysis: The paper presents a solid theoretical foundation for trapdoor-based defenses, establishing definitions for trapdoor effectiveness and visibility and providing a theorem that explains their impact on MI attacks.
+ Comprehensive Experiments: The authors conduct thorough experiments on various MI attacks using different DNN architectures and datasets, demonstrating the generalizability and robustness of Trap-MID.
+ Well-structured and Clear Explanation: The paper is well-structured and provides clear explanations of the proposed method, theoretical analysis, and experimental results. The figures and tables effectively illustrate the key concepts and findings.
Weaknesses: - Assumption of Trust: Trap-MID assumes a level of trust between data providers and the model owner, which might not always be feasible in practice. Future work could explore ways to empower data providers or individuals to secure their sensitive information before sharing data.
- Limited Scope of Attackers: The paper assumes white-box attackers with full access to the model. However, in practical scenarios, attackers may only have access to the model's predictions or even just the labels. It would be beneficial to explore the effectiveness of Trap-MID against black-box and label-only attackers.
- Vulnerability to KD: Trap-MID's efficacy may be limited against MI attacks involving knowledge distillation (KD), as KD enables the attacker to extract trapdoor information and explore private data simultaneously. Developing more robust trapdoors against KD is a crucial direction for future research.
- Robustness to Adaptive Attacks: The paper explores the effectiveness of Trap-MID against adaptive attacks where the attacker knows the existence of trapdoors. However, it would be valuable to investigate the robustness of Trap-MID against other types of adaptive attacks, such as those that exploit specific vulnerabilities in the trapdoor design.
- Random Trigger Initialization: The randomly initialized triggers introduce variability in defense performance, leading to a larger standard deviation compared to previous defenses. Exploring more stable and powerful trigger designs would enhance the consistency and reliability of Trap-MID.
- Limited Exploration of Trigger Design: While the paper demonstrates the effectiveness of a simple trigger design, exploring more sophisticated trigger designs could potentially improve the defense performance further.
- Limited Analysis of Other Reconstruction-based Attacks: The paper primarily focuses on model inversion attacks. Analyzing the effectiveness of Trap-MID against other reconstruction-based attacks, such as Gradient Inversion Attacks and Embedding Inversion Attacks, would provide a more comprehensive evaluation of its generalizability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Impact of KD on Defense Performance: Could the authors provide more insights into the impact of KD on Trap-MID's defense performance? Specifically, what modifications could be made to the trapdoor design or training process to make it more robust against KD-based attacks?
2. Stability of Trigger Design: How sensitive is Trap-MID's performance to the specific trigger design? Are there any guidelines or heuristics for choosing an effective and stable trigger design?
3. Trade-off between Privacy and Utility: The paper mentions an accuracy-privacy trade-off due to the trapdoor loss weight. Could the authors elaborate on this trade-off and provide more insights into the impact of different weight values on privacy and utility?
4. Impact of Distributional Shifts: The paper briefly mentions distributional shifts in the auxiliary dataset. Could the authors provide a more detailed analysis of how distributional shifts impact Trap-MID's defense performance and how it compares to existing defenses in such scenarios?
5. Comparison with Other Reconstruction-based Attacks: The paper focuses on model inversion attacks. How effective is Trap-MID against other reconstruction-based attacks, such as Gradient Inversion Attacks and Embedding Inversion Attacks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors acknowledge the limitations of their work, including the assumption of trust and vulnerability to KD. They also recognize the need for further exploration of trigger designs and analysis of other reconstruction-based attacks. These limitations are clearly stated
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful review. We address the specific weaknesses and questions raised in your comments below:
**W1: Trap-MID assumes trust between data providers and model owners, which isn't always feasible.**
This limitation is discussed in Appendix A.2. We follow a common scenario that relies on model owners to protect private data. Since backdoor attacks can be conducted by poisoning the dataset, we believe Trap-MID's success can inspire future work enabling data owners to preserve privacy.
**W2: It would be beneficial to explore the effectiveness of Trap-MID against black-box and label-only attacks.**
To address this, we conduct experiments against BREP-MI, a label-only attack. Our result shows that BREP-MI is ineffective on Trap-MID, requiring over 820,000 iterations to initialize latents for all identities and getting 0% attack accuracy in untargeted attacks (recovering 300 identities), demonstrating Trap-MID's efficacy and generalizability:
|Defense|Accuracy ↑|# of Initial Iterations ↑|Attack Accuracy ↓|
|-|-|-|-|
|Trap-MID|81.62|171|0.00|
Results against other defenses can be found in Table 1 of the attached file in Author Rebuttal. We will add this analysis to our paper.
**W3 & Q1: Could the authors provide more insights into KD's impact on Trap-MID and how to make it more robust against KD-based attacks?**
KD reduces Trap-MID's efficacy as student models typically do not learn trapdoor behaviors. Conducting shadow KD during trapdoor injection could be one of the solutions [1]. Besides, we found that combining Trap-MID with NegLS also improves defense, even against KD-based attacks:
|Attack|Defense|Accuracy ↑|AA-1 ↓|AA-5 ↓|KNN Dist ↑|FID ↑|
|-|-|-|-|-|-|-|
|LOMMA (KED-MI)|Trap-MID|81.37|61.25|85.76|1404.77|24.19|
||w/ NegLS|77.10|**42.47**|**70.64**|**1521.82**|**37.22**|
Intuitively, NegLS makes it harder to extract private data and therefore makes the trapdoor more attractive. This suggests that Trap-MID is orthogonal to existing defenses, and a hybrid approach may enhance robustness against specific adaptive attacks.
Results against other attacks are available in Table 4 of the attached file. We will add this analysis to our paper.
**W4: It would be valuable to investigate Trap-MID's robustness against other types of adaptive attacks.**
Thank you for the suggestions. Due to time constraints, we couldn't explore this further in this work. We recognize the importance of this future direction to help improve the trapdoor design.
**W5: The randomly initialized triggers introduce variability in defense performance.**
We acknowledged this limitation in Appendix A.2. However, the worst-case performance of Trap-MID still surpasses the best-case performance of existing defenses against most attacks, indicating its effectiveness:
|Attack|Defense|Acc ↑|AA-1 ↓|AA-5 ↓|KNN Dist ↑|FID ↑|
|-|-|-|-|-|-|-|
|GMI|BiDO (best)|78.32|4.42|12.94|2036.78|47.55|
||Trap-MID (worst)|79.39|**0.56**|**2.46**|**2280.19**|**75.16**|
|KED-MI|NegLS (best)|81.79|29.64|57.28|1544.90|**47.31**|
||Trap-MID (worst)|81.55|**23.80**|**46.58**|**1665.86**|21.23|
|LOMMA (GMI)|NegLS (best)|81.79|48.58|75.80|1423.78|**38.27**|
||Trap-MID (worst)|81.55|**44.80**|**72.60**|**1535.01**|35.47|
|LOMMA (KED-MI)|MID (best)|76.67|**59.18**|**86.30**|**1413.53**|**24.55**|
||Trap-MID (worst)|81.55|69.32|90.90|1333.36|20.95|
|PLG-MI|NegLS (best)|81.79|83.84|96.58|1495.23|**73.45**|
||Trap-MID (worst)|79.39|**15.72**|**30.98**|**1843.42**|36.91|
**W6: Exploring more sophisticated trigger designs could improve the defense further.**
This future direction is discussed in Appendix A.2. Our work serves as the first step in trapdoor-based defense. Besides, Appendix D.1 compares Trap-MID with patch-based triggers, showing the importance of trigger design. Future research could explore advanced triggers to improve defense further.
**W7 & Q5: How effective is Trap-MID against other reconstruction-based attacks?**
We discuss this future work in Appendix A.2. As these reconstruction attacks also optimize inputs to satisfy the adversary's objectives, we believe that Trap-MID can be extended to mitigate them with proper "shortcuts." However, since this work focuses on MI defense, the extension is beyond the scope, and we leave it to future research.
**Q2: How sensitive is Trap-MID to the specific trigger design? Are there any guidelines for choosing a stable trigger design?**
Appendix D.1 compares Trap-MID with patch-based triggers, showing the importance of trigger design. In this work, we use trapdoor loss and discriminator loss to optimize the triggers, which already surpassed existing defenses. We leave further customization to future work (e.g., improving trigger initialization and optimization, or using recent backdoor techniques).
**Q3: Could the authors elaborate on the accuracy-privacy trade-off from trapdoor loss weight?**
We discussed this trade-off in Appendix D.4. A larger weight encourages the model to learn trapdoor behavior instead of the main task, enhancing defense at the cost of accuracy. Table 8 shows that increasing this weight from 0.02 to 0.2 reduces PLG-MI's attack accuracy from 24% to 6%, while the test accuracy decreases from 84% to 81%.
**Q4: Could the authors provide a more detailed analysis of how distributional shifts impact Trap-MID's defense performance and how it compares to existing defenses in such scenarios?**
Section 4.2 and Appendix E.2 analyze this impact. Distributional shifts in auxiliary data degrade attack performance, especially against Trap-MID. In particular, distributional shifts make extracting private data harder, which therefore makes trapdoors more attractive to the attacks. Tables 4 and 11 show that PLG-MI's attack accuracy drops from 6% to nearly 0% with distributional shifts, highlighting its effectiveness in this case.
[1] Ge et al., Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation, ACM MM 2021
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's reply. The rebuttal basically solved my questions, so I will slightly increase the score based on the original rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reconsideration! We are glad to hear that our responses have successfully addressed your concerns. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback that helped us improve our paper. We are encouraged that they found Trap-MID to be a **novel** (MuQ4, bcsy), **clever, and conceptually straightforward** (bcsy) approach to defending against MI attacks. The feedback that our work **fills a gap in existing defenses** (P6SC), **provides new insights** (BGpu), and **opens up avenues for privacy-preserving DNNs** (MuQ4) is greatly appreciated. We are pleased that the experimental results (bcsy), defense performance (MuQ4, P6SC, bcsy), and efficiency of Trap-MID were well-received (P6SC). We are also glad that our paper was found to be **well-structured** (MuQ4) and **well-written** (bcsy), with an **intuitive idea** (BGpu), **solid theoretical foundation** (MuQ4), **comprehensive experiments** (MuQ4), and **well-defined evaluation** (bcsy).
We address some common points below. Detailed responses to other questions are in the reviewer-specific rebuttals. Additional tables and figures referenced in our responses are in the attached PDF.
**Q1: Defense performance against black-box/label-only attacks (MuQ4, bcsy)**
We add the experiments against BREP-MI, a label-only attack. BREP-MI failed to initialize latents for all 1,000 identities in a reasonable time against Trap-MID, requiring over 820,000 iterations to sample latents for only 942 of them. This demonstrates Trap-MID's effective privacy protection.
We also conducted untargeted attacks to recover 300 identities. Trap-MID significantly increased the number of initial iterations required and reduced attack accuracy to 0%:
|Defense|Accuracy ↑|# of Initial Iterations ↑|Attack Accuracy ↓|
|-|-|-|-|
|-|87.83|2|65.00|
|MID|76.67|2|46.33|
|BiDO|79.62|3|39.00|
|NegLS|81.76|3|52.00|
|Trap-MID|81.62|**171**|**0.00**|
The results can also be found in Table 1 of the attached file. We will add this analysis to our paper.
**Q2: Defense performance under high-resolution scenarios with modern architectures (BGpu, bcsy)**
We add the experiments against PPA to demonstrate a high-resolution (224x224) scenario with modern models. Due to time and computational constraints, we modified the attack to optimize 20 samples and select 5 in the final stage (PPA originally optimized 200 samples and selected 50, which would take about 10 days on our machine).
Although Trap-MID does not fully mitigate PPA with default settings, it preserves privacy to a certain degree without an accuracy drop. Besides, increasing the trapdoor loss weight or combining Trap-MID with NegLS further improves defense performance, reducing attack accuracy to 2%, which shows Trap-MID's effectiveness under different scenarios. We discuss the defense combination further in Q4.
|Defense|Acc ↑|AA-1 ↓|AA-5 ↓|$\delta_{face}$ ↑|$\delta_{eval}$ ↑|FID ↑|
|-|-|-|-|-|-|-|
|-|87.08|92.00|98.28|0.7240|140.93|37.76|
|Trap-MID|89.98|59.48|72.22|0.9468|182.90|49.64|
|$\beta$=0.5|83.39|11.78|19.24|1.3985|269.99|63.04|
|w/ NegLS|83.56|**1.60**|**4.90**|**1.4744**|**279.68**|**79.37**|
Detailed results, including other architectures, are available in Table 2 of the attached file. We will add this analysis to our paper.
**Q3: Evaluation metrics (BGpu, bcsy)**
We agree that the visualization can be subjective due to the ambiguous definition of "class representative." However, in Figure 2 of our paper, we found that recovered images from Trap-MID show attributes more distinct from the private data. For example, those of Identity 1 show different skin tones, those of Identities 2 and 5 show different hairstyles, and those of Identity 4 show different genders. We also provide more recovered samples in Figure 1 of the attached file. Similarly, the recovered images of Identity 4 from Trap-MID show different hair colors from the private data compared with other defenses.
For the quantitative analysis, we found that attack accuracy/KNN distance and FID complement each other. Specifically, the former two estimate the extracted attributes of the target identity, while the latter measures the naturalness and stylistic similarity of recovered images. However, we agree that developing a universal metric capturing all these properties would be valuable for straightforward comparisons between MI attacks/defenses.
Our PPA experiments in Q2 include additional metrics such as FaceNet's feature distance used in [1], as well as improved precision, recall, density, and coverage used in [2]. Also, we use them to evaluate PLG-MI's results to provide a more comprehensive analysis of Trap-MID. The results are available in Tables 2 and 3 of the attached file. We will add this analysis to our paper.
**Q4: Further design of adaptive attacks. How to make Trap-MID more robust to mitigate them? (MuQ4, bcsy)**
We acknowledge the need for exploring adaptive attacks further. Our work is a first step in understanding the relationship between trapdoors and MI defenses. Future research could develop advanced adaptive attacks to investigate and improve the trapdoor design further.
In addition, we found that combining Trap-MID and NegLS improves defense performance, even against KD-based attacks, suggesting that Trap-MID is orthogonal to existing defenses and can be incorporated with them. Intuitively, previous approaches focus on reducing information leakage, which makes it harder to extract private data and therefore makes our shortcut more attractive. Another future direction is to combine multiple defense methods to counter specific adaptive attacks.
|Attack|Defense|Accuracy ↑|AA-1 ↓|AA-5 ↓|KNN Dist ↑|FID ↑|
|-|-|-|-|-|-|-|
|LOMMA (KED-MI)|Trap-MID|81.37|61.25|85.76|1404.77|24.19|
||w/ NegLS|77.10|**42.47**|**70.64**|**1521.82**|**37.22**|
Results against other attacks can be found in Table 4 of the attached file. We will add this discussion and analysis to our paper.
[1] Struppek et al., Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. ICML 2022
[2] Wang et al., Variational model inversion attacks. NeurIPS 2021
Pdf: /pdf/feb9b70c80078fec61216e001f058f63de39e881.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models | Accept (poster) | Summary: This paper proposes visual sketchpad that aims to aid the existing multimodal language models (MLMs). Specifically, the visual sketchpad serves as a task-specific prompting technique that calls tools to draw sketches for the input problem as additional context to help solve the problem. This prompt technique is like a chain of thought but in a visual way, and the tools include code generation as image drawer as well as vision specialist models. Benchmarks show that the proposed visual sketchpad frameworks can significantly boost the performance of base models (e.g., GPT-4o and LLAVA-Next) on multiple math and vision tasks.
Strengths: 1. This paper proposes an interesting and effective framework (i.e., visual sketchpad) to assist the MLMs through chain of thought. Compared with text-only chain-of-thought prompts with external tools, the proposed visual sketchpad further expands the thought generation in a visual fashion, which is particularly suitable for certain scenarios like the geometry-based reasoning for math problems.
2. The authors design multiple tools that enable visual sketchpad for various tasks: code generation to draw diagrams/figures for math problems, and visual specialists (including depth generation / sliding window / zoom-in / crop / overlay images) for visual reasoning tasks. The rich toolset provides multiple combinations to analyze the input problem and generate suitable sketches for problem solving.
3. The proposed visual sketchpad does not need any training or finetuning, making it easy to integrate with existing MLMs. In the evaluation, GPT-4o / GPT-4 Turbo w/ visual sketchpad achieved significant improvements on geometry, graph, math, and game problems. In addition, the sketchpad also brings decent performance gains and achieves SOTA on several complex visual reasoning tasks.
4. The paper is well written and easy to comprehend.
Weaknesses: 1. The proposed visual sketchpad attempts to generate code to draw the images. I wonder if it is possible that the code sometimes has issues and cannot generate images properly. If so, how are such issues dealt with?
2. For the geometry problem, can the visual sketchpad support geometry image (e.g., in jpg) as input? If so, how does the visual sketchpad overlay lines on the image?
3. Currently, the visual sketchpad is a set of different prompts that tackle specific problems with a corresponding set of tools. Can the visual sketchpad be a single prompt to call tools and tackle all the problems listed in the evaluation? If so, that would be useful as the prompt can be integrated into the system prompt for MLMs as an enhanced version.
4. The evaluation misses an important set of comparisons: there is no comparison between the proposed approach and existing chain-of-thought frameworks.
5. Although most benchmarks in the evaluation show that the additional context from visual sketchpad brings benefits, I wonder if there are some cases that the additional context introduces noises and causes errors compared with the baseline model.
6. Typos
Line 64: jigwaw -> jigsaw
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weaknesses section for the questions.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the author discusses some limitations in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and constructive review! We are honored that you believe visual sketchpad is interesting and effective. We address each question as follows. Hope that our response clarified your concerns, and we would be grateful if you could consider improving the rating after seeing our responses!
**1. The proposed visual sketchpad attempts to generate code to draw the images. I wonder if it is possible that the code sometimes has issues and cannot generate images properly. If so, how are such issues dealt with?**
**Answer:** We thank the reviewer for pointing this out. Sometimes, LMs generate incorrect code that results in execution errors. Our framework addresses this problem by feeding the error message back to the model and prompting it to generate a revised version of the code. This idea was developed by a long line of prior work, for example, self-debug [1] and AutoGen[51]. We will add more discussions in the final version.
[1] Chen, Xinyun, Maxwell Lin, Nathanael Schärli, and Denny Zhou. "Teaching large language models to self-debug." arXiv preprint arXiv:2304.05128 (2023).
**2. For the geometry problem, can the visual sketchpad support geometry image (e.g., in jpg) as input? If so, how does the visual sketchpad overlay lines on the image?**
**Answer:** This is a great question. We experimented with many multimodal LLMs (e.g., GPT-4o) and diffusion-based image editing models (e.g., SDXL-edit), but found that these models were unable to accurately add the required line to the geometry image. As a result, our current framework does not support using only the geometry image as input. Instead, we provide the matplotlib code for the geometry diagrams, enabling the LLM to add a line programmatically. Future work would be to develop or fine-tune an image-editing model specifically for overlaying precise lines on geometric images. We appreciate your question and will include this discussion in our future work section.
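To make the "add a line programmatically" idea concrete, here is a minimal, hypothetical matplotlib sketch; the triangle, its coordinates, and the choice of auxiliary perpendicular are invented for illustration and are not taken from the paper's actual prompts:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt

# Base diagram the LLM is given as code: triangle ABC.
A, B, C = (0, 0), (4, 0), (1, 3)
fig, ax = plt.subplots()
ax.plot(*zip(A, B, C, A), color="black")  # closed triangle outline

# Auxiliary line the LLM might append: a perpendicular dropped from C onto AB.
ax.plot([C[0], C[0]], [C[1], 0], linestyle="--", color="red")
ax.set_aspect("equal")
fig.savefig("geometry.png")
```

Because the diagram exists as code rather than pixels, appending one `ax.plot` call places the auxiliary line with exact coordinates, which is what image-editing models failed to do.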
**3. Currently, the visual sketchpad is a set of different prompts that tackle specific problems with a corresponding set of tools. Can the visual sketchpad be a single prompt to call tools and tackle all the problems listed in the evaluation? If so, that would be useful as the prompt can be integrated into the system prompt for MLMs as an enhanced version.**
**Answer:** Thanks for your suggestion! Visual sketchpad uses a single unified prompt for all vision tasks and similarly, a single unified prompt for all math tasks. These prompts can be directly integrated into the system prompt of any MLMs.
**4. The evaluation misses an important set of comparisons: there is no comparison between the proposed approach and existing chain-of-thought frameworks.**
**Answer:** Thanks for your feedback. We'd like to clarify that our GPT-4-turbo and GPT-4 baselines already incorporate chain-of-thought reasoning. These models produce multiple reasoning steps before outputting a final answer. It is widely believed that OpenAI trains their models to output chain-of-thought reasoning during the instruction-tuning and alignment processes. Here is a sample output from the baseline methods. As you can see, the baseline already employs a chain-of-thought approach:
**Query:** You are given an adjacency matrix of a graph and two query nodes. \n\nYOUR TASK is to find if there is a path between the two nodes.\n\n Example:\nAdjacency Matrix: [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]....
**Baseline GPT-4 output:** The adjacency matrix provided represents an undirected graph where each row and column corresponds to a node, and the value at a given position (i, j) indicates whether there is a direct edge between node i and node j. A value of 1 represents an edge, and a value of 0 represents no edge.\n\nFrom the adjacency matrix, we can see that most nodes have no edges connecting them to any other nodes, except for nodes 8 and 10 which are connected to each other (as indicated by the 1s at positions (8, 10) and (10, 8)).\n\nThe query is asking whether there is a path between node 9 and node 4. Looking at the adjacency matrix:\n- Node 9 has no edges connecting it to any other nodes (the entire row and column corresponding to node 9 are zeros).\n- Similarly, node 4 has no edges connecting it to any other nodes (the entire row and column corresponding to node 4 are zeros).\n\nSince neither node 9 nor node 4 has any connections to other nodes, and there are no intermediate nodes that connect these two nodes, there is no path between node 9 and node 4.\n\nFINAL ANSWER: no\n
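As a side note, the reachability question in this sample can also be checked mechanically. The sketch below uses breadth-first search over an adjacency matrix; the matrix is a toy 6-node stand-in (the full matrix in the query above is truncated), with only one edge between nodes 2 and 4:

```python
from collections import deque

def has_path(adj, src, dst):
    """BFS over an adjacency matrix: is dst reachable from src?"""
    seen = {src}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in range(len(adj)):
            if adj[node][nxt] and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy matrix: 6 nodes, a single undirected edge 2-4.
adj = [[0] * 6 for _ in range(6)]
adj[2][4] = adj[4][2] = 1
print(has_path(adj, 2, 4))  # True
print(has_path(adj, 0, 5))  # False
```

The baseline's chain-of-thought reaches the same conclusion as this procedure: isolated nodes (all-zero rows/columns) cannot be connected by any path.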
**5. Although most benchmarks in the evaluation show that the additional context from visual sketchpad brings benefits, I wonder if there are some cases that the additional context introduces noises and causes errors compared with the baseline model.**
**Answer:** This is a good point. Yes. There are instances where the additional context from the visual sketchpad can introduce errors. For example, the vision expert in our framework such as GroundingDINO, may wrongly annotate or miss bounding boxes. We found that GPT-4o is really good at figuring out if these vision experts are making mistakes, and correcting them during reasoning, as we write in L 288-289. But there are times where GPT-4o is still misled by these noises. We appreciate your feedback and will include this point in the limitations section of the final version.
**6. Typos Line 64: jigwaw -> jigsaw**
**Answer:** Thanks for pointing out the typo. We will fix it in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your time and effort in reviewing our paper! If you have any additional questions for discussion, we would be more than happy to address them. We will make every effort to revise our paper based on the reviewer's feedback and suggestions. | Summary: This work proposes Visual SKETCHPAD, a framework designed to incorporate visual reasoning into chain-of-thought and tool-use paradigms. Specifically, a multimodal LLM (Large Language Model) addresses a query by (1) generating a plan, (2) executing an action, (3) updating the current context with the result of the action, and (4) iterating until sufficient information is gathered to answer the query.
The plan can include various types of visual reasoning, such as visual program generation via matplotlib and networkx, leveraging specialist vision models like object detectors, segmentation models, and depth estimation models. It also includes the use of specialized tools such as image overlay or a chess drawing Python library. The main contribution of Visual SKETCHPAD lies in the reuse of intermediate visual outputs as additional observations, which enable effective chain-of-thought reasoning. SKETCHPAD demonstrates significant performance improvements compared to text-only chain-of-thought LLMs across multiple tasks.
Strengths: - *Originality:* Incorporation of visual intermediate steps is a natural extension of chain-of-thought reasoning for multimodal LLMs.
- *Quality:* SKETCHPAD is a generalized framework for extending chain-of-thought reasoning with visual artefacts produced by vision models, program execution, etc. A comprehensive evaluation on existing closed-source and open-source LLMs is provided. Authors additionally conduct a human study on the discrepancy between LLM- and human-made plans.
- *Clarity:* The manuscript is of high quality and easy to follow and the proposed idea is simple to grasp. Authors include prompt examples on supp that clarify the practical form of the proposed framework
- *Significance:* Authors demonstrate a strong performance increase for LLMs prompted via SKETCHPAD compared to text-only chain-of-thought reasoning. The proposed framework does not require any training or tuning and existing LLMs can directly be used with SKETCHPAD style prompts to solve problems that require the described type of visual reasoning.
Weaknesses: - *Clarity:* The reviewer found the title and general positioning of this work slightly misleading. Visual SKETCHPAD would naturally refer to a chain-of-thought framework that includes generalised sketching capabilities. Throughout this work, sketching solely refers to the drawing of auxiliary lines for geometric problems and is only one of the multiple tools that the proposed framework makes use of for visual artefact generation.
- *Originality:* The main concern of the reviewer is with the extent of the novelty of this work. The single differentiating factor compared to standard pipelines is the incorporation of visual artefacts in intermediate CoT steps. Incorporation of visual reasoning information for LLM reasoning cannot be considered a purely novel proposition (for example, [a] or [60]). Even though the proposed approach departs from these methods by iteratively updating the context with newly formed visual artefacts, the reviewer is still skeptical w.r.t. the extent of the contribution for NeurIPS standards.
[a] See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning. Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, Chuang Gan
Technical Quality: 4
Clarity: 3
Questions for Authors: - Could the authors elaborate further on the extent of their contribution and how their work is different from [a] and [60]?
- Authors could include more details about baselines reported on Table 1. Where these follow a similar text-based CoT pipeline?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The limitations were properly addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback! We are encouraged that you acknowledge the originality, quality, clarity, and significance of our work. We address your concerns as follows, and hope that they can clarify your concerns, and hope that you can improve the rating after seeing the responses!
**1. Clarity: The reviewer found the title and general positioning of this work slightly misleading. Visual SKETCHPAD would naturally refer to a chain-of-thought framework that includes generalised sketching capabilities. Throughout this work, sketching solely refers to the drawing of auxiliary lines for geometric problems and is only one of the multiple tools that the proposed framework makes use of for visual artefact generation.**
**Answer:** We respectfully point out that this characterization is inaccurate. For vision problems, our work draws numbers and bounding boxes on objects. This is similar to the human sketching process: we circle things on an image and put numbers on it to aid reasoning. Even for math problems, we believe that drawing out math functions and graphs is also a sketching process, as humans often do on a sketchpad when working through math problems.
**2. Originality: The main concern of the reviewer is with the extent of the novelty of this work. The single differentiating factor compared to standard pipelines is the incorporation of visual artefacts in intermediate CoT steps. Incorporation of visual reasoning information for LLM reasoning cannot be considered a purely novel proposition (for example, [a] or [60]). Even though the proposed approach departs from these methods by iteratively updating the context with newly formed visual artefacts, the reviewer is still skeptical w.r.t. the extent of the contribution for NeurIPS standards.**
**Answer:** We again respectfully disagree with the reviewer. Updating the visual context during reasoning is the *key novelty* unique to our work, and it brings a significant performance gain. Prior works, for example [a], [60], VisProg, and ViperGPT, only reason over text, and their performance is limited. For example, [a] uses GPT-3 and gets 44.6% on OK-VQA, while an earlier work, [b], gets 48% by directly prompting GPT-3 with image captions. ViperGPT obtains the best zero-shot results in this line of work, but is still far from the supervised state of the art. We have been following this line of research, and find that the key missing piece is that the LLM cannot make new plans based on new visual contexts. Our method is the first work in this line that greatly outperforms all existing state of the art and unleashes the power of the best multimodal LLMs.
Sidenote: Thanks for bringing up [a]. We will cite it in our final version.
[b] Yang, Z., Gan, Z., Wang, J., Hu, X., Lu, Y., Liu, Z., & Wang, L. (2022). An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA. AAAI.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your time and effort in reviewing our paper! If you have any additional questions for discussion, we would be more than happy to address them. We will make every effort to revise our paper based on the reviewer's feedback and suggestions.
---
Rebuttal Comment 1.2:
Title: Answer to author rebuttal
Comment: Thanks for the rebuttal. The reviewer still disagrees with the general positioning of the paper and the amount of the technical novelty. However, as pointed out in the review, this paper showcases some important insights that can help the community in re-iterating over it. The reviewer maintains the initial rating of 5. | Summary: This paper studies the problem of using language models to generate code to draw for intermediate reasoning. Particularly, the idea of chain-of-thought is applied to facilitate the reasoning process, such that the auxiliary "drawings" enhance the LM's reasoning ability. The proposed method is tested both on math and vision tasks, showing promising results.
Strengths: - Chain-of-thought (CoT) is introduced in the reasoning process. Moreover, visual elements, such as lines, boxes, marks, and segmentation maps, are used for intermediate reasoning steps. In contrast, most previous works only use text.
- The overall pipeline is reasonable, and good results are achieved on different problems, including math and vision tasks.
- The paper is well-organized and written.
Weaknesses: - The overall idea of using an LM to generate code to manipulate images is not new. As the authors pointed out, VisProg is one of the most similar works. Personally, I view this work as incremental over VisProg, introducing chain-of-thought and applying multimodal reasoning.
- It is very confusing to use the word "sketches" since the drawings are straight lines, boxes, or marks, which are in fact irrelevant to sketches from my view.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is there any evidence that LMs can change the plan given the intermediate visual outcomes during reasoning?
- Is there an in-context learning for LM-based code generator?
- Is it possible to equip VisProg with CoT (using the same specialist vision models as this work), thus achieving similar performance on math and vision problems?
- To solve the geometry problem, the proposed model is going to generate auxiliary lines. Is there GT that can be used to quantitatively assess the correctness of the lines?
- In Table 3, VisProg performs poorly and the reason claimed is the errors from vision modules. Why Sketchpad did not suffer from this issue?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback! We are honored that you believe the pipeline is reasonable, achieving good results, and the paper well-written. We address your questions below. Hope that we addressed your concerns, and we would be grateful if you could consider improving the rating after seeing our responses!
**1. The idea is not new. VisProg is a similar work. Personally, I view this work as incremental over VisProg, introducing chain-of-thought and applying multimodal reasoning.**
**Answer:** Great to hear that you are also following this direction! We have also followed this direction for a long time, and believe Sketchpad fixes the key pain point of VisProg/ViperGPT. The huge performance gain demonstrates the significance of our work. This work actually starts with the authors carefully investigating ViperGPT's trajectories when solving OK-VQA. We find that many LLM-generated programs are wrong, and the vision tools break frequently. We realize that the LLM can be much more powerful if it can investigate the intermediate visual artifacts during the execution of the programs. We further develop this idea and find that it also applies to math and geometry problems. The key innovation is not about vision tool-use, but about multimodal reasoning: LLMs should think step by step across modalities, just like humans do when they are drawing on sketchpads.
**2. It is very confusing to use the word "sketches" since the drawings are straight lines, boxes, or marks, which are in fact irrelevant to sketches from my view.**
**Answer:** We understand your concerns about our terminology. We use the term ‘sketch’ as a metaphor for the process of humans drawing things on a sketchpad while thinking. We are also open to advice on new terminology!
**3. Is there any evidence that LMs can change the plan given the intermediate visual outcomes during reasoning?**
**Answer:** Yes, there is substantial evidence supporting this. For example, on the V* dataset, for 26% of the examples GPT-4o finds that GroundingDINO fails to detect small objects and decides to use a sliding window to perform a more careful search. Figure 1 (d) is another good example: after using segmentation, GPT-4o decides to use depth estimation to further confirm the answer. Furthermore, the performance gap between VisProg and our method in Table 3 quantitatively shows the difference. VisProg can be viewed as a version in which the plan cannot be changed, and its performance is much lower than ours (e.g., 17% vs. 86% on MMVP).
**4. Is there an in-context learning for LM-based code generator?**
**Answer:** Yes. For computer vision tasks, all tasks share the same prompt, and the prompt contains 6 in-context examples. For math tasks, there are 5 in-context examples.
**5. Is it possible to equip VisProg with CoT (using the same specialist vision models as this work), thus achieving similar performance on math and vision problems?**
**Answer:** In our VisProg experiment, to make a fair comparison, we replaced the LLM with GPT-4o/turbo and then replaced the vision experts with the ones used by our method. It performs far worse than our method, as shown in Table 3. So the short answer is no: VisProg cannot achieve similar performance. The key difference is that in VisProg, the LLM makes the plan after seeing only the textual query, without any visual context. We manually inspected VisProg's traces and found that many plans are neither reasonable nor robust. For example, GroundingDINO often makes mistakes, and the wrong bounding boxes break the VisProg programs.
**6. To solve the geometry problem, the proposed model is going to generate auxiliary lines. Is there GT that can be used to quantitatively assess the correctness of the lines?**
**Answer:** Good question! The short answer is no because it is really hard to generate GT for a geometry problem. There are often multiple solutions for the same question, using different auxiliary lines that get the same correct answer. To evaluate the correctness of the lines, we have to show the reasoning trajectory of GPT-4o to humans and let them decide. As shown in L293-297, humans find that for 80% of the cases, the auxiliary line is reasonable.
**7. In Table 3, VisProg performs poorly and the reason claimed is the errors from vision modules. Why Sketchpad did not suffer from this issue?**
**Answer:** Great question! The reason is that Sketchpad includes visual results as part of the reasoning process, allowing GPT-4 to inspect the results and find the errors from vision modules. For example, GroundingDINO may make mistakes for object detection. In VisProg, since the plan is not changed, there is no way to fix it. For Sketchpad, GroundingDINO will also visualize its results, with bounding boxes drawn around the objects. GPT-4 can easily figure out which boxes are wrong by inspecting the visualized results.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your time and effort in reviewing our paper! If you have any additional questions for discussion, we would be more than happy to address them. We will make every effort to revise our paper based on the reviewer's feedback and suggestions. | Summary: The Visual SKETCHPAD framework integrates sketching capabilities into multimodal language models, enabling them to iteratively draw, plan, and reason using visual artifacts - similar to how humans leverage sketching to facilitate problem-solving. Unlike prior approaches that relied on text-to-image models, Visual SKETCHPAD allows language models to directly create visual elements like lines, boxes, and marks, and can even incorporate specialized vision models to enhance the sketching process. Experiments across a range of math and complex visual reasoning tasks demonstrated that Visual SKETCHPAD substantially improves performance.
Strengths: 1. Comprehensive Experiments: Evaluated on a wide range of math and complex visual reasoning tasks and achieves decent results, which is good.
2. Good Idea: Demonstrates the value of integrating sketching and visual reasoning capabilities into multimodal language models.
Brings language models closer to how humans naturally think and solve problems using a combination of language and visual artifacts.
3. Well delivery. The paper is well-written.
Weaknesses: More discussion about robustness (repeatability) is needed, since a common problem for these commercial LVLMs is instability. Does the method alleviate invalid responses? How robust is it? Do multiple retries help to improve the performance?
Technical Quality: 3
Clarity: 4
Questions for Authors: More explanation about the human evaluation is needed. I noticed the authors have included a user study. How exactly is the user study conducted? The authors are encouraged to elaborate on this.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your kind and insightful feedback! We are honored that you believe Sketchpad is a great idea. We address your questions as follows:
**1. More discussion about robustness (repeatability) is needed, since a common problem for these commercial LVLMs is instability. Does the method alleviate invalid responses? How robust is it? Do multiple retries help to improve the performance?**
**Answer:** This is a great question. Regarding invalid responses, we observe that GPT-4-turbo and GPT-4o rarely produce invalid responses for our datasets. There are a few (<5) examples per dataset blocked by safety shields. For robustness, we set the temperature of decoding to 0 to reduce randomness, but still notice some instability in the OpenAI API. To address this, we conducted 3 runs per task. For instance, on BLINK depth, the GPT-4o baseline has a standard deviation of 1.5%, while with visual sketchpad, it's 1.2%. We will update the final version with mean and variance for 3 runs for each task.
**2. More explanation about the human evaluation is needed. I noticed the authors have included a user study. How exactly is the user study conducted? The authors are encouraged to elaborate on this.**
**Answer:** For human evaluation, we present the reasoning steps of GPT-4o (interleaved between text and images) to two human subjects. In the geometry task, we ask the annotators to respond with "yes" or "no" to the question: "Would you draw the same auxiliary lines to answer the question?" For computer vision tasks, we ask: "Are the visual reasoning steps reasonable?" We will include an additional human evaluation section in the appendix to provide more comprehensive details.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your time and effort in reviewing our paper! If you have any additional questions for discussion, we would be more than happy to address them. We will make every effort to revise our paper based on the reviewer's feedback and suggestions. | Rebuttal 1:
Rebuttal: We appreciate all reviewers for their timely and positive feedback. We are encouraged that the reviewers believe visual sketchpad is “a good idea” (Reviewer BpTp), “interesting and effective” (Reviewer jSat), with “originality” and “significance” (Reviewer 2LyD). Also, all reviewers believe that the experiments are “comprehensive” and achieved great results; the paper is “well-written” with “great clarity”. (Reviewer BpTp, nzNU, 2LyD, jSat) We believe visual sketchpad is a novel and effective framework to fully unleash the power of multimodal LLMs, “bringing LMs closer to how humans naturally think” (BpTp) by solving complex problems with a combination of text and visual artifacts.
We will address each reviewer’s questions below. And we are happy to address any further comments from reviewers. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs | Accept (poster) | Summary: The paper introduces MR-Ben, a benchmark designed to evaluate the meta-reasoning capabilities of large language models (LLMs). This benchmark focuses on the models' ability to detect and correct errors in reasoning steps, addressing the limitations of existing outcome-based benchmarks.
Strengths: - The paper is well-articulated, providing clear examples that effectively demonstrate the dataset's composition and intended utility.
- This work contributes a new benchmark for identifying and correcting errors in reasoning steps rather than just the final outcomes.
- The benchmark covers diverse topics and is paired with reasonable metrics.
- A comprehensive set of experiments is conducted, revealing performance disparities among various LLMs.
Weaknesses: - The $ACC_{reason}$ metric shows limitations in consistency. Its dependency on the judgments of different LLMs or human evaluators could lead to variability in scoring.
- The weights assigned to different metrics might need recalibration when new models are tested or when the validation set is updated.
- A detailed report on each metric's individual performance and impact is lacking.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Since the MR metric is newly proposed, would it be reasonable to incorporate human evaluations to support its credibility and relevance? By comparing the model outputs against human judgments on at least a sampled subset, it can provide empirical evidence on how well the MR metric aligns with human reasoning and offer insights into the absolute performance of models.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate your kind review and insightful questions. We are more than happy to address them as follows:
> **W1: The ACC_reason metric’s dependency on the judgments of different LLMs or human evaluators could lead to variability in scoring**
We would like to argue that due to the careful design of our evaluation mechanism, the automatic scoring of error reasons is both robust and economically feasible:
- **Multiple annotators**: During the annotation stage, we collected multiple annotations for the first error reasons and potential error rectification from different annotators who agreed on the solution's correctness and the first error step.
- **Proxy Model Evaluation**: Based on the ground truth **annotations collected from various perspectives**, the proxy language model (e.g., GPT-4-Turbo) then examines the error reasons provided by evaluating models. Given the question/solution pair and information regarding the first error step, error reasons, and rectification, **the potential flaws of the error reasons provided by the evaluating models will be easy to diagnose under contrast**.
- **ACC_reason robustness**: Below is the scoring of error reasons sampled from our evaluation results. For the same set of error reasons collected in each subject, three different models predicted correctness/incorrectness (each cell shows the number of error reasons judged correct / judged incorrect). We can clearly see **the consistency of their predictions among the three models over questions in all subjects**. Since the MR-Score is a weighted metric, the final score variability is less than 1 percent in total.
| Model | Coding | Physics | Biology | Math | Medicine | Chemistry | Logic |
|--------------|--------|---------|---------|------|----------|-----------|-------|
| gpt-4-turbo | 83/55 | 137/15 | 164/11 | 305/46 | 194/25 | 166/27 | 192/16 |
| deepseek_coder | 100/38 | 145/7 | 167/8 | 321/30 | 200/19 | 172/21 | 193/15 |
| Qwen2-72B | 99/39 | 142/10 | 167/8 | 312/39 | 195/24 | 172/21 | 200/8 |
- **Agreement Rate**: As mentioned in lines 207-209, **the agreement rate between manual annotations and the GPT-4 predictions over 100 samples randomly collected from all subjects is 92%.** This high agreement rate also supports the reliability of our evaluation and therefore avoids the manual annotation of potentially 138,000 problems (6,000 benchmark sizes times 23 models evaluated).
> **W2: The weights assigned to different metrics might need recalibration.**
We would like to clarify that, for consistency consideration, the weights assigned to different sub-metrics are not supposed to be recalibrated in the future, even when new models are tested. This is because:
**The discriminative ability of final MR-Scores is not sensitive to the weights**: We performed a comprehensive grid search over 23 models we evaluated. The results have shown that even for the large model coverage, the variance of MR-Scores across different models (representing the differentiability of MR-Score) does not change very much for different combinations of weights. We therefore considered both the difficulty levels of all three subtasks and their progressive nature, and selected the weighting schema that assigned increasing weights to solution correctness prediction, first error step determination, and error reason explanation.
**The current weighting ratio strikes a good balance between interpretability and differentiation**: Traditional reasoning accuracy assigns similar scores to the SOTA LLMs; for example, GPT-4-Turbo, Deepseek-v2-236B, and Mistral-Large achieve 86.4%, 78.5%, and 81.2% respectively on MMLU but score 43.2%, 29.4%, and 21.3% on our benchmark. This widened performance gap showcases our superior differentiability.
> **W3: Missing detailed report on each metric's individual performance.**
We apologize for not including detailed sub-task performance tables due to limited space. We agree that this information is beneficial for interpreting model behaviors. Since the rebuttal is limited to 6,000 words, **we have moved the sub-task performance tables into the PDF posted under the global reply section of this rebuttal**. Please kindly refer to the tables there for more information.
> **Q1: Incorporate human evaluations to support the credibility and relevance of the MR metric.**
As detailed in lines 207-209, we have reported the **92% of human-model agreement rates** on the error reason scoring. Below is the exact detail of our setup:
We randomly collected 100 data instances across all subjects where the evaluating model correctly identified the solution correctness and the first error step. We then manually examined whether the proxy scoring model (e.g., GPT-4-Turbo-2024-04-09) had correctly scored the error reasons of the evaluating models. Below is the detailed breakdown of the rate at which the authors agreed with the proxy scoring model:
| Coding | Physics | Biology | Medicine | Chemistry | Logic | Math |
|--------|---------|---------|----------|-----------|-------|-------|
| 7/8 | 12/13 | 21/21 | 12/12 | 15/17 | 15/16 | 10/13 |
The annotation time varies significantly across subjects: some problems, such as coding and chemistry, can take more than 10 minutes to evaluate, while subjects like biology are easier to evaluate. We sincerely hope the author-model agreement rates and the agreement table among the three different models above can relieve your concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal and I encourage you to include further clarification and results into your work.
---
Reply to Comment 1.1.1:
Title: Reply to reviewer JiJF
Comment: Dear reviewer JiJF:
Thanks for your kind encouragement; we will make sure to incorporate the clarifications prompted by your comments into our work. If you believe our explanation has addressed your concerns, we would be very appreciative if you could kindly consider updating your rating.
We wish you a wonderful day ahead.
Sincerely
Authors | Summary: This paper introduces a new benchmark for evaluating the reasoning capabilities of large language models (LLMs). Current methods primarily focus on final outcomes and do not sufficiently capture the intricacies of the reasoning process. To address this issue, this paper proposes MR.BEAN, a process-based benchmark that demands meta-reasoning skills from LLMs, requiring models to identify and analyze potential errors in automatically generated reasoning steps.
The authors conducted an extensive analysis of various LLMs using MR.BEAN, revealing significant limitations and previously unidentified weaknesses in their reasoning abilities. They found that while many LLMs can generate correct answers, they struggle to pinpoint and correct errors in the reasoning process. The paper also discusses the potential for improving reasoning abilities through techniques like the use of high-quality synthetic data.
Strengths: 1) The paper introduces a novel benchmark, MR.BEAN, which focuses on meta reasoning—a higher-order thinking skill. This design pushes beyond traditional outcome-based evaluations to assess the reasoning process itself. And MR.BEAN covers a wide range of subjects, including physics, chemistry, logic, coding, and more.
2) The benchmark's questions and error analyses are curated and annotated by human experts, ensuring a high level of quality and relevance in the evaluation process.
3) The paper's evaluation of various LLMs reveals previously unidentified weaknesses in their reasoning abilities, providing valuable insights for researchers and developers. The benchmark's application to a diverse array of models, from small to large, open-source and closed-source, provides a broad comparative analysis that can inform future development in AI reasoning.
Weaknesses: 1) The paper may not provide a thorough comparison between the model's automatic annotations and human annotations. Without such validation, it is challenging to assess the reliability and accuracy of the model-generated annotations.
2) The concept of meta-reasoning typically involves instructing models on how to reason, which may include decision-making during the reasoning process. The paper primarily analyzes the reasoning steps after they have been generated, which might not fully align with the proactive aspect of meta-reasoning.
3) The paper could provide more transparency regarding the annotation process, such as the total number of annotators involved, the total time spent on annotations, and the average number of samples annotated per person per day. This information is crucial for understanding the scalability and efficiency of the annotation process.
4) The paper may not fully address how the dataset can be used to validate an LLM's reasoning capabilities on new, unseen examples. Understanding the generalizability of the findings is essential for assessing the true breadth of an LLM's reasoning skills.
5) The paper could benefit from including specific test cases using GPT-4 to demonstrate the benchmark's effectiveness in identifying strengths and weaknesses in state-of-the-art models. This would provide a clearer picture of how GPT-4 performs against the benchmark and highlight the benchmark's utility.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind review and insightful comments. We are committed to addressing your concerns and providing clarifications.
> **W1: Missing a thorough comparison between the model's automatic annotations and human annotations.**
(Since all the instances in MR. Bean are annotated manually, we assume you are referring to the automatic scoring of error reasons generated by the evaluating models.)
Thank you for highlighting the importance of manual examination compared to automatic evaluations. We fully agree that manual examination can often provide a higher level of trust and thoroughness. However, we have designed our evaluation mechanism to provide robust and reliable automatic annotations of the model responses.
- **Multiple annotators**: During the annotation stage, we collected multiple annotations for the first error reasons and potential error rectification from different annotators who agreed on the solution's correctness and the first error step.
- **Proxy Model Evaluation**: Based on the ground truth annotations collected from various perspectives, the proxy language model (e.g., GPT-4-Turbo) then examines the error reasons provided by evaluating models. Given the question/solution pair and information regarding the first error step, error reasons, and rectification, the potential flaws of the error reasons provided by the evaluating models will be easy to diagnose under contrast.
- **Agreement Rate**: As mentioned in lines 207-209, the agreement rate between manual annotations and the GPT-4 predictions over 100 samples randomly collected from all subjects is **92%**. This high agreement rate supports the reliability of our evaluation and therefore avoids the manual annotation of potentially 138,000 problems (6,000 benchmark sizes times 23 models evaluated).
> **W2: Not aligned with the proactive aspect of meta-reasoning.**
Your observation about the proactive nature of meta-reasoning is insightful. Since meta-reasoning is not yet a well-defined and recognized concept, we follow the definition of MR-GSM8K and MR-MATH to define meta-reasoning as an evaluation process that scores the reasoning steps. This definition, which tasks the evaluating models to reason about different reasonings, is different from the proactive prompting instruction methods in the following ways:
- **Evaluation Mechanism**: Even though the language models might adopt a more systematic and structured reasoning paradigm to answer the question under the proactive definition of meta-reasoning you suggest, its effect is measured by metrics that examine **the computation results only**. The quality and correctness of the intermediate reasoning steps are in no way guaranteed or reflected in the final metric numbers. On the contrary, our evaluation mechanism provides an effective way to dissect the reasoning process into meticulously annotated subtasks, revealing the key properties of meta-reasoning ability.
- **Teacher Role**: By shifting the model's role from a student generating answers to a teacher scoring solutions, **our mechanism forces the model to actively reflect on and critique conditions, assumptions, and logic, examining potential outcomes counterfactually.** All of the above are essential for a more robust reasoning model. In some sense, our definition of meta-reasoning is orthogonal and complementary to the proactive reasoning paradigm you suggested.
> **W3: Annotation details**
Based on your suggestion, we provide the annotation details as follows. We are committed to including these details in the revised version of the paper.
- **Annotator Qualification & Training**: As mentioned in lines 152-157, our annotators hold a minimum of a bachelor's degree. Each annotator is required to read through the annotation guidelines listed in Appendix G before completing a trial labeling process. The selection of annotators is based on their performance on a balanced small hold-out set of problems for each subject. Every annotator is ensured to be paid above the local minimum wage rate.
- **Initial Annotation**: As mentioned in section 3.4, each question was labeled by two different annotators. Inconsistencies in solution correctness or the first error step were identified and reviewed by a quality controller for arbitration.
- **Quality Control**: In the final quality control phase, 10% of the problems were randomly sampled and reviewed by meta controllers (authors). The author-annotator agreement rate had to exceed 90% for annotations to be accepted.
- **Annotator Details**: Each subject usually comprised 5-6 annotators with two project supervisors for quality control. The annotation process took approximately three weeks, followed by two weeks for quality control and resolving disagreements. Annotators generally handled around 20-30 questions per day, though this varied slightly depending on the difficulty level of the subject matter.
> **W4: Validate an LLM's reasoning capabilities on new, unseen examples.**
We would like to clarify that one of the core novelties and contributions of our meta-reasoning paradigm is that someone can apply it to transform any “student answering” benchmark to a “teacher scoring” benchmark. **By successfully applying our benchmark on top of the well-recognized but already performance-saturated benchmarks like MMLU, LogiQA, and MHPP, we have observed a substantial performance drop for SOTA models** (e.g. Mistral-Large achieved ~80% accuracy in MMLU but scored 21.3 in our benchmark). This supports our point that the meta-reasoning evaluation pipeline is a challenging mechanism that uncovers the holisticness and comprehensiveness of language models regarding the mastery of domain knowledge and its application. We believe this paradigm will significantly contribute to the community, whether applied to existing datasets or new compilations. | Summary: This paper introduces MR.BEAN, a comprehensive benchmark for evaluating meta-reasoning capabilities of large language models (LLMs). Comprising 6,006 questions across various subjects including physics, chemistry, logic, coding, and more, MR.BEAN requires LLMs to analyze and correct errors in automatically generated reasoning steps. The benchmark is meticulously constructed using a three-step annotation process involving answer correctness evaluation, error step identification, and error reason analysis.
The main contributions of this work are:
1. A novel, large-scale benchmark for meta-reasoning evaluation covering diverse subjects and reasoning types.
2. A rigorous methodology for creating and annotating meta-reasoning questions, ensuring high-quality data.
3. Comprehensive evaluation of 15 LLMs on the benchmark, revealing limitations and weaknesses in their reasoning abilities.
Strengths: 1. Comprehensive and well-organized dataset: MR.BEAN covers many subjects (p.3, Table 1) and offers a broad assessment of LLM meta-reasoning capabilities across diverse domains. The paper is structured clearly, with detailed explanations of the dataset creation process, evaluation metrics, and experimental results.
2. Novel meta-reasoning focus: By requiring LLMs to identify and correct errors in reasoning, MR.BEAN offers a unique perspective on evaluating AI reasoning capabilities, going beyond traditional outcome-based assessments.
3. Extensive empirical study and analysis: The authors evaluate different LLMs on MR.BEAN and have tested different prompting methods to comprehensively analyze current model capabilities and reveal interesting limitations in their reasoning abilities.
Weaknesses: 1. Validation of meta-reasoning specificity: While the paper describes the evaluation metrics in detail (p.5, l.185-197), it is still not clear such metrics measures meta-reasoning abilities rather than general language understanding or domain knowledge.
2. Prompt sensitivity: The paper doesn't adequately address the potential impact of prompt design on the generated solutions and errors. Given that LLMs are known to be sensitive to prompt wording, this could significantly affect the nature and distribution of errors in the dataset.
3. Lack of annotation details: While the paper mentions a three-stage annotation process (p.4, l.116-135), it lacks specific details on annotator qualifications, training, and inter-annotator agreement rates. This information is crucial for assessing the reliability and consistency of the annotations.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you provide more details on the annotation process, including inter-annotator agreement rates and resolution of disagreements?
2. What steps were taken to ensure that MR.BEAN specifically measures meta-reasoning abilities and not merely language understanding or domain knowledge?
3. How sensitive is the dataset to the specific prompts for generating solutions and errors? Did you experiment with different prompt formulations, and if so, how did this affect the resulting dataset?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind review and insightful and to-the-point comments. We are committed to addressing your concerns and providing clarifications.
> **W1: How does our evaluation mechanism measure meta-reasoning abilities rather than general language understanding or domain knowledge**
We believe language understanding and domain knowledge are inseparable and essential components of reasoning. Examples include MMLU, LogiQA, and MHPP: these well-recognized reasoning evaluation benchmarks require substantial language understanding ability and application of domain knowledge. However, as outlined in our abstract and introduction, **these benchmarks primarily use a result-oriented evaluation method rather than a process-oriented one**. This can be misleading, since a model may reach the correct final answer despite incorrect understanding and reasoning.
Our benchmark addresses this limitation by proposing a meta-reasoning framework that transforms evaluating models from the role of students generating results to the role of teachers scoring solution processes. **To effectively score these processes, evaluating models must actively reflect on and criticize conditions, assumptions, and logic, and examine potential outcomes counterfactually.** The above capabilities are essential for a more robust and trustworthy reasoning mechanism.
Therefore, our meta-reasoning paradigm provides a challenging yet feasible evaluation pipeline that forces models to "reason about reasoning" (i.e., score candidate solutions), which we term meta-reasoning. This paradigm aims to evaluate the reasoning in a more fine-grained approach (measured in a series of subtasks and represented under a unified metric), and to do so it indeed requires advanced language understanding and mastery of domain knowledge as you suggested.
> **W2: The paper doesn't adequately address the potential impact of the prompt.**
We agree that language models can be susceptible to different prompting methods.
- **Response generation prompt design**: We fully agree with you that language models are susceptible to prompt wordings, which can affect the distribution of reasoning errors in the evaluation benchmark. With this in mind, **our response generation prompt follows the general best practices of prompt engineering guidelines.** As illustrated in Figure-10 in the Appendix, our prompt has the following key attributes: (1) clear task description and background information, (2) persona adoption as an experienced scoring teacher, (3) a divide-and-conquer structure that splits the goal into sub-tasks, (4) allowing “time” for the model to think in a step-by-step manner, (5) clear separation of different parts of information, and (6) specific requirements on the return format.
The final prompt presented in Figure-10 is the iterated version that accounts for prompt wording effects and has proven effective, as explained in lines 219-222.
- **Few-Shot In-Context Learning**: In section 6.1, we experimented with this method and observed performance fluctuations across models of different sizes. We suspect lengthy demonstrations might confuse the models. To test the hypothesis that length affects model performance, we conducted a Pearson correlation analysis and found a coefficient of -0.58 between model performance and question length, and -0.29 for solution length. This negative correlation supports our hypothesis.
- **Self-Reflect Prompting Method**: In section 6.2, we adopted a response-examine-refine pattern. This did not significantly boost performance, as models often switched their decisions from correct to incorrect and vice versa.
- **Ground Truth Solution Correctness**: We provided hints of ground truth solution correctness to see if models could better identify error steps and reasons. The positive results indicate that correct prior knowledge in the prompt can indeed affect outcomes.
Overall, while prompting methods can influence performance, **our benchmark provides a robust evaluation mechanism that remains relatively stable** unless influenced by explicit hints.
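(As an aside, the length-sensitivity analysis described in the Few-Shot bullet above reduces to a standard Pearson correlation. Below is a minimal sketch with made-up placeholder lengths and scores, not the paper's actual measurements.)

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up placeholders: longer questions paired with lower accuracy
# yield a negative coefficient, as reported in the rebuttal.
question_lengths = [120, 340, 560, 780, 990]
model_accuracy = [0.62, 0.55, 0.48, 0.41, 0.35]
r = pearson(question_lengths, model_accuracy)  # strongly negative here
```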
> **W3: The paper lacks specific details on annotator qualifications, training, and inter-annotator agreement rates.**
Based on your suggestion, we provide the annotation details as follows. We are committed to including these details in the revised version of the paper.
- **Annotator Qualification & Training**: As mentioned in lines 152-157, our annotators hold a minimum of a bachelor's degree. Each annotator is required to read through the annotation guidelines listed in Appendix G before completing a trial labeling process. The selection of annotators is based on their performance on a balanced small hold-out set of problems for each subject.
- **Initial Annotation**: As mentioned in section 3.4, each question was labeled by two different annotators. Inconsistencies in solution correctness or the first error step were identified and reviewed by a quality controller for arbitration.
- **Disagreement Resolution**: We did not adopt majority voting due to the objective nature of our questions. Instead, a senior supervisor reviewed all questions with disagreements to resolve ambiguities (therefore the inter-annotator agreement rate is not applicable here).
- **Quality Control**: In the final quality control phase, 10% of the problems were randomly sampled and reviewed by meta-controllers (authors). The author-annotator agreement rate had to exceed 90% for annotations to be accepted.
- **Annotator Details**: Each subject usually comprised 5-6 annotators with two project supervisors for quality control. The annotation process took approximately three weeks, followed by two weeks for quality control and resolving disagreements.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for your detailed response; I would like to keep my rating positive for this paper.
---
Reply to Comment 1.1.1:
Title: Response to the reviewer TVR6
Comment: Dear reviewer TVR6, thanks for your kind review and insightful questions. We are happy that you find the reply helpful. We wish you all the best : ) | Summary: This paper proposes a meta-reasoning benchmark for evaluating the solutions generated by a large-language model (LLM) to shift the focus more to process-based evaluation of an LLM's reasoning abilities rather than outcome-based evaluation.
*Evaluation*:
On a variety of question-solution pairs, the model is asked to score a generated solution for correctness and to identify the first erroneous step; if these are correctly identified, the error reason it provides is evaluated by GPT-4. These three are used to compute three metrics respectively: the MCC for binary classification of solution correctness; ACC_step, the fraction of incorrect solutions whose first error step is correctly predicted; and ACC_reason, the fraction of incorrect solutions whose first error step and error reason are both correctly predicted. These three are combined to form an MR-score that is used to compare different models.
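(The three sub-metrics and their weighted combination can be sketched in a few lines. The weights 0.2/0.3/0.5 and the clipping of negative MCC values are illustrative assumptions; the text only states qualitatively that the weights increase toward ACC_reason.)

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (1 = solution correct)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def mr_score(mcc_val, acc_step, acc_reason, weights=(0.2, 0.3, 0.5)):
    """Weighted combination of the three sub-metrics. Clipping negative MCC
    at 0 (so a random or biased classifier earns no credit) is an assumed
    behavior -- the excerpt does not state it explicitly."""
    w1, w2, w3 = weights
    return w1 * max(mcc_val, 0.0) + w2 * acc_step + w3 * acc_reason
```

A perfect model scores 1.0 on all three sub-metrics and thus an MR-Score of 1.0, while a model that classifies solution correctness at chance contributes nothing through the MCC term.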
*Dataset*:
The question-solution pairs (~6k in all) are formed such that the questions are sampled from MMLU (arithmetic reasoning), LogiQA (logical reasoning), and MHPP (code-based reasoning), while the solutions are generated via chain-of-thought prompting by GPT-3.5-Turbo-0125, Claude-2, and Mistral-Medium. These question-solution pairs are labeled by human annotators; the resulting annotations of solution correctness, first error step, and error reason are used for evaluation.
Strengths: - The authors highlight the importance of more carefully evaluating the solutions generated by LLMs to complex problems rather than just comparing their accuracy which is a notable contribution to the body of work of improving evaluation of LLMs.
- The annotation process followed for experimental evaluation is thorough and multi-step for quality assurance
- The authors raise an interesting question about problem solving dynamics, which can be potentially explored further in future work on LLMs
Weaknesses: These questions/remarks might also have arisen due to my lack of proper understanding, so I am willing to increase my score if these can be clarified:
- My main issue is with the mixing of reasoning based and accuracy based evaluation into a singular score. What is the reasoning behind combining the three metrics (MCC, ACC_step, and ACC_reason) into a single score (MR-Score)? Does this not affect the motivating insight that models might have a high MCC but a super low ACC_step and an even lower ACC_reason? Is there somewhere the three metrics can be viewed individually also so it's clearer how the models vary along the three-- right now, it is not clear to me by looking at the MR-scores in Table 2 how the 15 models actually fare when it comes to identifying the correct error-reason, especially when the MR-scores for the models are in the same ballpark.
Writing:
- (pre-existing work): it's not clear to me what the real differentiating aspect of MR. Bean is vis-a-vis MR-GSM8K and MR-Math, since MR-GSM8K and MR-Math also consider the first-error reason. So, is it that they are math-reasoning based, and MR. Bean now just combines multiple datasets to essentially follow the same evaluation pipeline? If so, this should be clearly mentioned: they don't just go a step further; identifying the error reason is already the furthest step taken, even considering this work (if that is indeed the case).
- The abstract makes it seem like the MR-score is a metric designed by the authors in this work, although it's already been proposed "*Through our designed metrics*" which is somewhat over-selling the work.
- (minor) typo: A.2 Negative Societal Impacts
- The connection between task difficulty and reasoning capability can be explored further-- I am assuming ACC_reason on more difficult tasks would be comparably worse than the MCC, which would be the actual assessment of reasoning capabilities. Also, from Fig 2, it seems that Logic based tasks are the most difficult, then wouldn't it be fair to include them in the comparison too or perform the comparison of task difficulty not just based on high-school/college, but also take into consideration how the model actually interprets the task difficulty based on MR-score?
Experiments:
- what is the main takeaway on self-refine prompting since it seems to be in contrast to existing observations? How does the shift from incorrect to correct predictions depend on the model family and task? Is it seen only across MR-Scores or also in the ACC_reason (which should probably be affected more by model size?)?
- Is the difficulty of logic-based reasoning also because they contain the longest questions? is there any impact of question length on the model's reasoning ability, if not solution length?
Technical Quality: 3
Clarity: 3
Questions for Authors: Some questions for clarification:
- I might have missed this, so in Table 2: is the meaning of k the number of shots of prompting the model gets? If so, it would help to mention that in the caption since k is not mentioned in the text anywhere else.
- What should be the take-away on self-correction?
- Is the difference between the Self-Refine prompting (section 6.2) and Solution Correctness Prior (section 6.3) the absence/presence of the ground truth (resp.)? How do the two compare with each other?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - The proposed benchmark also relies on step-wise solutions to evaluate reasoning which reduces its applicability and novelty as compared to existing work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and insightful comments. We hereby address your concerns below:
> **W1: Mixing metrics into a singular score**
Given the **interdependent and progressive** nature of the three tasks (MCC - ACC_step - ACC_reason), we can either choose to combine them **organically** by assigning weights that **consider both the differentiability and interpretability,** or simply report the final metric ACC_reason. Considering the three metrics are complementary in revealing how models perform in the respective dimension as you also suggested, we claim the necessity for MR-Scores as follows:
- **Unified Metric**: The MR-Score offers a unified and normalized metric that balances the difficulty levels of the three sub-tasks (lines 198-199). We conducted a thorough grid search to determine the weights for the sub-metrics and found that the MR-Scores are insensitive to relatively minor (e.g. ~0.1) adjustments to the weightings. We therefore chose the schema that assigns the greatest weight to ACC_reason and the least weight to MCC, as the error reason explicitly reflects the model's understanding of the rationale behind the question. We believe **the current weighting ratio strikes a good balance between interpretability and differentiation**: For example, GPT-4-Turbo, Deepseek-v2-236B and Mistral-Large achieve 86.4%, 78.5% and 81.2% respectively in MMLU but score 43.2%, 29.4% and 21.3% in our benchmark.
- **MCC and its correlations**: MCC is chosen because it effectively penalizes random or biased behaviors. Although, given the progressive nature of the three tasks, the raw scores of MCC, ACC_step, and ACC_reason generally decrease in that order, the correlations among them are not excessively high (corr(MCC vs ACC_step)=0.42, corr(MCC vs ACC_reason)=0.46). Therefore, all three metrics are necessary components of an accurate and thorough evaluation.
- **Individual metrics**: Due to the space limit, **we report the sub-table in the pdf posted under the global response section of this rebuttal**.
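For concreteness, the weighted combination of the three sub-metrics can be sketched as below. The specific weights in `w` are illustrative placeholder assumptions, not the paper's actual values; the only property taken from the rebuttal is that ACC_reason carries the greatest weight and MCC the least.

```python
def mr_score(mcc, acc_step, acc_reason, w=(0.2, 0.3, 0.5)):
    """Combine the three sub-metrics into one normalized score.

    The weights are hypothetical: per the rebuttal, the real schema
    assigns the greatest weight to ACC_reason and the least to MCC,
    but the exact values may differ. All inputs are assumed to lie
    in [0, 1] (MCC rescaled from [-1, 1] if needed).
    """
    w_mcc, w_step, w_reason = w
    assert abs(w_mcc + w_step + w_reason - 1.0) < 1e-9  # weights sum to 1
    return w_mcc * mcc + w_step * acc_step + w_reason * acc_reason

# A model strong on explaining error reasons outscores one that is only
# good at judging solution correctness:
profile_correctness_only = mr_score(0.9, 0.4, 0.3)
profile_reason_strong = mr_score(0.5, 0.5, 0.6)
```

Under this kind of weighting, the reason-heavy profile scores higher, matching the rebuttal's rationale for emphasizing ACC_reason.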
> **W2: Differences among MR.Bean, MR-GSM8K and MR-Math.**
Our work builds upon previous efforts and introduces several key improvements.
- **Extensive domain coverage**: By recruiting and training a diverse group of domain experts as annotators and operating meticulous supervision and quality control, we extended MR.Bean to be a comprehensive benchmark covering coding, logic, medicine, and science, in addition to math, providing a broader evaluation spectrum.
- **Larger scale and increased difficulty**: While MR-GSM8K and MR-Math focus on primary- and high-school math competitions, MR.Bean raises the difficulty to the high-school-to-graduate level. MR-GSM8K and MR-Math are both limited in dataset size, **while MR.Bean is 199.1% and 1,195% larger**, respectively. The increased scale and difficulty diversity both contribute to a more robust evaluation.
- **Rigorous and fine-grained annotations**: MR-Math only considered solution correctness and the first error step. MR-GSM8K additionally annotated the first error reason. However, each problem in MR-GSM8K only contains annotations of a single solution and a single error reason. In MR.Bean, **each problem is mapped to three solutions sampled from different SOTA models and the error reasons are provided by multiple annotators who agreed on solution correctness and the first error step. Additionally, we annotate the revisions of the first error step.** The revisions are ultimately integrated into the error reasons used by proxy LLMs as a reference to score the error reasons generated by evaluating models.
> **W3: Length and difficulties**
The table below shows the zero-shot average MR-Scores of SOTA models per subject, sorted in ascending order of question length (measured by the number of words). Other question statistics can be found in Table 1 of the paper. We can indeed observe a negative correlation between performance and question length.
| Model | Math | Chemistry | Biology | Physics | Medicine | Coding | Logic |
|----------------|-------|-----------|---------|---------|----------|--------|-------|
| (question length) | 44.3 | 48.1 | 56.3 | 66.6 | 88.7 | 140.1 | 154.8 |
| mistral-large | 21.53 | 24.49 | 21.48 | 24.27 | 16.34 | 21.8 | 15.1 |
| deepseek-chat | 32.18 | 32.52 | 29.97 | 32.44 | 26.54 | 34.18 | 23.58 |
| gpt-4-turbo | 44.28 | 41.71 | 44.77 | 42.54 | 38.89 | 50.99 | 30.98 |
For further quantitative investigations, we conducted a Pearson correlation analysis:
- **Question Length**: We found a Pearson correlation coefficient of -0.58 between question length and model performance, indicating a relatively strong negative correlation.
- **Solution Length**: The correlation coefficient is -0.29, showing a lesser but still notable effect.
However, it is important to note that the difficulty of the subject question is a complicated factor and not solely dependent on the question length. For example, the difficulty of logic questions extends beyond their length to their inherent abstractness and the need for commonsense and real-world understanding, since its questions are sourced from the LogiQA dataset, originally collected from the Civil Service Entrance Exam (for case demonstrations, please refer to Appendix E-6).
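The correlation analysis above can be reproduced in miniature on the per-subject table. This stdlib-only sketch uses only the GPT-4-Turbo row, so the coefficient it produces differs from the -0.58 reported over the full set of (model, question) data points; it only illustrates the sign of the relationship.

```python
import math

# Average question length per subject (words) and GPT-4-Turbo MR-Scores,
# copied from the per-subject table above.
lengths = [44.3, 48.1, 56.3, 66.6, 88.7, 140.1, 154.8]
scores = [44.28, 41.71, 44.77, 42.54, 38.89, 50.99, 30.98]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(lengths, scores)
print(f"r = {r:.2f}")  # negative: longer questions, lower scores
```

On this single row the coefficient is weaker than -0.58, which is consistent with the rebuttal's caveat that question length is only one of several difficulty factors.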
---
Rebuttal 2:
Title: Continual of Rebuttal to Reviewer SioJ
Comment: > **W4: Takeaways on self-refine prompting**
The self-refine prompting experiment was designed to unveil if LLMs are capable of discovering their own reasoning flaws and effectively rectifying them. The result was indeed intriguing and therefore **we have decomposed the behavior of models across tasks in Appendix D and visualized it in Figure 9**.
We summarize our observation below:
- **Small Models** like Gemma-2B are too limited to perform effective self-reflection.
- **Competent Models** like GPT4-Turbo are confident in their initial decisions, hardly switching the decisions during self-reflection.
- **Intermediate Models** like Llama3-70B exhibit substantial changes during self-reflection, indicating a lack of consistency in their decisions. However, their change of decisions from incorrect to correct happens to be significantly higher in locating the first error step than in examining solution correctness or explaining the error reason, thereby boosting the overall MR-Score by a large margin. We believe this lack of consistency does not indicate a more robust or advanced reasoning ability, despite the increase in the evaluation results.
**Conclusion**: Our results support the observation that LLMs generally lack effective self-refinement capabilities [1][2].
Ref:
[1] Large Language Models Cannot Self-Correct Reasoning Yet. 2024
[2] LLMs cannot find reasoning errors, but can correct them given the error location. 2024
> **W5: Clarifications**
- **Clarification of 'k'**: 'k', as in 'k-shot', represents the number of demonstrations in a prompt.
- **Typos**: Thank you for pointing out the typo in Appendix A2.
We will correct them in the revised manuscript.
> **Q: Difference between Self-Refine and Solution Correctness Prior**
Yes, the difference is indeed the absence/presence of the ground truth, specifically:
- **Self-Refine**: LLMs are asked to generate a three-step reasoning process, where the LLM first answers the question directly and then self-critiques its own response. Finally, the LLM generates the final refined response based on its original response and critique.
- **Solution-Correctness Prior**: The information that the provided solution is incorrect is included as part of the input prompt. LLMs are only asked to identify the first error step and explain the reason for it.
> **Limitation: The proposed benchmark relies on stepwise solutions to evaluate reasoning which reduces its applicability and novelty as compared to existing work.**
We would like to clarify that our evaluation mechanism ensures the robustness of the process. LLMs are tasked to determine the solution's correctness, first error step, and error reason. Even if the model made correct predictions on the solution correctness and first error step via a flawed reasoning process, such a process will generally lead to incorrect/incomplete error reasons.
**When the proxy scoring language models (e.g. GPT4-Turbo) are presented with the question/solution pair and the detailed error reasons provided by several annotators from different perspectives, the flawed error reasons generated by the evaluating models are easy to diagnose via contrast.** This is supported by the high author-model agreement rate (92%, as written in line 208) in the automatic error reason scoring process.
---
Rebuttal Comment 2.1:
Title: reply to authors
Comment: - Based on the attached pdf with the individual scores and the authors' explanation, I am convinced the MR score rightly captures process based evaluation as well (for eg, it shows a higher MR score when $ACC_{reason}$ score is higher whereas MCC is much lower). This was my biggest concern.
- However, based on the authors' response, now it is clearer to me that MR.Bean is indeed an incremental improvement over MR-GSM8K and MR-Math, employing the **same metric** and the **same evaluation protocol** on a larger dataset. In my opinion, that does not necessitate a new paper. The authors have also not edited the abstract that says *"Through our designed metrics"* to reflect that the MR score is a pre-existing metric which they use. Even so, I have already provided a positive score, which I cannot increase.
---
Reply to Comment 2.1.1:
Title: Reply to Reviewer SioJ
Comment: Dear Reviewer SioJ:
We are happy to see that your biggest concern has been addressed. Since we are not allowed to edit the submitted paper during the rebuttal period, we will make sure to include that in the next iteration of our paper. We are truly grateful for your kind review and detailed comments!
Wish you a great day ahead : )
Sincerely,
Authors | Rebuttal 1:
Rebuttal: Dear PCs, SACs, ACs and reviewers:
We sincerely appreciate your thoughtful review and insightful comments. We have tried our best to address your concerns one by one in the corresponding rebuttal sections. If our answers satisfy your queries, we would be grateful if you could consider revising your final rating to a higher score.
# PDF for Subtasks Performance Table
**Attached is the PDF of the breakdown performance table for models on all four metrics** (MR-Score, MCC, ACC_step and ACC_reason) that we could not include in the individual rebuttal sections due to the character limit. This table should hopefully bring some insights into our design choices for the MR-Score and the subtask metrics:
1. **Metric Robustness**: Due to the progressive nature of the definitions of our subtasks (e.g. the success of subsequent tasks depends on the previous ones), we can see a diminishing trend in the scores of MCC, ACC_step and ACC_reason. However, thanks to the design of our evaluation mechanism and metrics, **the score rankings of different models stay in a relatively stable order across metrics.** In other words, we have not observed any model that excels in determining solution correctness (thus high in MCC) but is unable to explain the rationale behind it (e.g. low in ACC_reason).
2. **Task Difficulties**: As shown in the breakdown table, the ACC_reason metric is more discriminative than the MCC metric for competent models, but vice versa for the less competent ones. This aligns with our intuition that more difficult questions are generally more discriminative for strong candidates, while the weaker ones are simply incapable of solving them whatsoever. **This phenomenon could in part explain why the MR-Score is generally not very sensitive to minor changes in the weightings assigned to the subtasks**, since the differentiability of the subtask metrics tends to reconcile across scenarios.
3. **Differentiability and Interpretability**: The weights of the MR-Score are ultimately decided by considering both discriminative ability and interpretability. To best differentiate models with different evaluation results, we conducted a thorough grid search to investigate the impact of the weightings. Since the grid search returned a few optimal weightings, we deliberately selected the one that assigns higher weights to more difficult tasks. **We believe the current weighting ratio strikes a good balance between interpretability and differentiation**: For example, GPT-4-Turbo, Deepseek-v2-236B and Mistral-Large achieve 86.4%, 78.5% and 81.2% respectively in MMLU but score 43.2%, 29.4% and 21.3% in our benchmark.
Hope this table will help clarify some of your concerns regarding model performance and metrics design.
Wishing you all the best,
Sincerely,
The Authors
Pdf: /pdf/9d5f3bfffcdb54c3c7617b8afbb285f56aacbff3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization | Accept (poster) | Summary: This paper uses a pre-trained conditional diffusion model for antibody design. This diffusion model is fine-tuned using a direct energy-based preference optimization method, focusing on optimizing residue-level energy preferences to enhance the generation of antibodies with desirable structures and high binding affinities. The authors also compared their method with SOTA baselines on 55 cases from RAbD benchmarks and showed good results.
Strengths: 1. The proposed algorithm is novel and interesting. The authors combine diffusion models and DPO to solve the specific problem of antibody design.
2. The algorithm is clear. The authors both define the diffusion process and DPO formation with very clear definitions. Meanwhile, all the figures with protein structure are very clear and informative.
3. The results are convincing. AbDPO archives good results on CDR total energy compared to other methods.
4. The experiments part is very detailed. The authors conduct experiments on 55 of 60 antibodies on both their methods and the compared baselines.
Weaknesses: Lack of explanation of SE(3)-equivariant neural network. The author uses the diffusion model with such an equivariant neural network from Luo et al. Lacking such an explanation may hurt the understanding of the whole method.
**Minor:**
- There are some writings inconsistent. For example, equations under line 130 are not marked with numbers, but equations under line 138 have numbers. The ABDPO in everywhere this paper is written as \textsc{AbDPO}, but in line 242 is "ABDPO". In line 235, the first letter in pyRosetta should be capitalized since it is a proper noun. Checking these format issues will improve the consistency in the future published version.
Reference:
----------------------------------
Luo et al: Shitong Luo, Yufeng Su, Xingang Peng, Sheng Wang, Jian Peng, and Jianzhu Ma. 2022. Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why we need to use SE(3)-equivariant neural network?
2. What is the inference time for generating one sample or one batch?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your strong support! Please see below for our responses to the comments.
**Q1: Lack of explanation of SE(3)-equivariant neural network. The author uses the diffusion model with such an equivariant neural network from Luo et al. Lacking such an explanation may hurt the understanding of the whole method.**
A1: Thanks for your valuable suggestion. We will complete the explanation in the revision.
**Q2: There are some writings inconsistent.**
A2: Thanks for your careful inspection, we will fix it in the revision.
**Q3: Why we need to use SE(3)-equivariant neural network?**
A3: SE(3)-equivariance is a crucial property in protein design because the structure of a protein is independent of the observation view. Utilizing SE(3)-equivariance can ensure stable and predictable performance in the presence of nuisance transformations of the data input [1]. In addition, we chose to build our method on DiffAb, which uses equivariant neural networks. Other baselines, such as MEAN and dyMEAN, are also E(3)-equivariant. Therefore, we retained the equivariant neural networks to ensure a fair comparison. Nevertheless, we would like to emphasize that our proposed method is not specific to equivariant NNs and can be applied to other base models beyond equivariant NNs.
Reference:
[1] Fuchs, Fabian, Daniel Worrall, Volker Fischer, and Max Welling. "Se (3)-transformers: 3d roto-translation equivariant attention networks." Advances in neural information processing systems 33 (2020): 1970-1981.
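The point in A3 can be illustrated numerically: under any rigid SE(3) transform (rotation plus translation) of the input coordinates, geometric quantities such as pairwise distances are unchanged, which is why predictions built on such quantities do not depend on the observation view. A minimal stdlib-only check with toy coordinates (not the paper's actual model or data):

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply_se3(points, R, t):
    """Apply the rigid transform x -> R x + t to each 3D point."""
    return [
        tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
        for p in points
    ]

def pairwise_dists(points):
    """All pairwise Euclidean distances, in a fixed order."""
    return [
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    ]

# Toy "residue coordinates": distances survive the rigid motion intact.
pts = [(0.0, 0.0, 0.0), (1.5, 0.2, -0.3), (2.1, 1.1, 0.7)]
moved = apply_se3(pts, rot_z(0.8), (5.0, -2.0, 3.0))
for d0, d1 in zip(pairwise_dists(pts), pairwise_dists(moved)):
    assert abs(d0 - d1) < 1e-9
```

An SE(3)-equivariant network generalizes this idea: rotating and translating the input rotates and translates the predicted structure accordingly, instead of producing an unrelated output.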
**Q4: What is the inference time for generating one sample or one batch?**
A4: Our model could be regarded as an aligned model to the specific preference of a pre-trained model, so the inference time will not be changed. We record the detailed time cost in each antigen-antibody complex and show them below (batch_size=16, single A100 40G GPU).
The inference time consists of two parts: model inference time and .pdb file reconstruction time. The inference time is related to the data length. Most antigen-antibody complexes are truncated to 256 residues when fed into the model, and the corresponding inference time is around 16-17s. The reconstruction time is determined by the total number of residues in the complex. For a complex with 500 residues, the reconstruction time of one batch of samples is around 2s. We list the detailed time cost below (the last few rows are dropped due to space limitations).
| pdb_id | infer_time (s) | data_length | reconst_time (s) | pdb_length |
|--------|----------------|-------------|------------------|------------|
| 1a14 | 16.73 | 256 | 2.36 | 612 |
| 1a2y | 16.76 | 256 | 1.29 | 351 |
| 1fe8 | 16.86 | 256 | 2.28 | 610 |
| 1ic7 | 16.76 | 256 | 1.25 | 349 |
| 1iqd | 16.73 | 256 | 2.09 | 563 |
| 1n8z | 16.75 | 256 | 3.6 | 1015 |
| 1ncb | 16.75 | 256 | 3.28 | 823 |
| 1osp | 16.73 | 256 | 2.44 | 682 |
| 1uj3 | 16.88 | 256 | 2.44 | 635 |
| 1w72 | 16.73 | 256 | 2.81 | 707 |
| 2adf | 16.73 | 256 | 2.28 | 615 |
| 2b2x | 16.71 | 256 | 2.19 | 607 |
| 2cmr | 16.8 | 256 | 2.35 | 604 |
| 2dd8 | 16.81 | 256 | 2.49 | 623 |
| 2vxt | 16.75 | 256 | 2.21 | 575 |
| 2xqy | 16.72 | 256 | 3.26 | 897 |
| 2xwt | 16.76 | 256 | 2.43 | 661 |
| 2ypv | 16.75 | 256 | 2.67 | 655 |
| 3bn9 | 16.81 | 256 | 2.45 | 662 |
| 3cx5 | 16.8 | 256 | 1.74 | 418 |
| 3ffd | 6.44 | 144 | 1.79 | 444 |
| 3hi6 | 16.78 | 256 | 2.34 | 606 |
| 3k2u | 16.81 | 256 | 2.46 | 655 |
| 3l95 | 16.77 | 256 | 3.5 | 661 |
| 3mxw | 16.76 | 256 | 2.26 | 585 |
| 3nid | 16.74 | 256 | 3.33 | 883 |
| 3o2d | 16.72 | 256 | 2.26 | 615 |
| 3rkd | 16.73 | 256 | 2.21 | 577 |
| 3s35 | 14.93 | 240 | 2.14 | 539 |
| 3w9e | 16.74 | 256 | 2.4 | 654 |
| 4cmh | 16.73 | 256 | 2.41 | 668 |
| 4dtg | 10.09 | 192 | 1.87 | 499 |
| 4dvr | 16.71 | 256 | 2.74 | 718 |
| 4ffv | 16.79 | 256 | 4.74 | 1144 |
| 4fqj | 16.72 | 256 | 2.68 | 731 |
| 4g6j | 16.81 | 256 | 2.19 | 578 |
| 4g6m | 16.85 | 256 | 2.31 | 583 |
| 4h8w | 16.76 | 256 | 2.82 | 760 |
| 4ki5 | 16.77 | 256 | 2.23 | 587 |
| 4lvn | 16.79 | 256 | 2.99 | 758 |
| 4ot1 | 15.79 | 248 | 2.08 | 558 |
| 4qci | 13.17 | 224 | 1.94 | 511 |
| 4xnq | 16.84 | 256 | 2.39 | 627 |
| 4ydk | 16.86 | 256 | 2.94 | 785 |
| 5b8c | 15.83 | 248 | 1.32 | 346 |
| 5bv7 | 16.8 | 256 | 3.2 | 809 |
---
Rebuttal Comment 1.1:
Comment: Thanks for your response! I hope to see the revised version of this paper later.
Please remember to add an explanation of the SE(3) network in your paper, either in the main paper or appendix. That would be more helpful in understanding the paper. | Summary: This paper proposes a new perspective for antibody design-- incorporating the energy factors aiming to minimize the overall energy of designed sequence and structure. It involves diffusion to maintain the sequence-structure co-design. Towards the variance of energy factors, the paper proposes the idea of gradient surgery to simplify it. The paper conducted extensive experiments, showing its performance towards other baselines. The paper is easy to follow.
Strengths: * The paper proposes an energy-based preference optimization approach to achieve better rationality and binding affinity, which is inspiring in the field of antibody design.
* The paper decomposites the energy factors, simplifying the gradient calculation process.
* The paper achieved significantly lower energy than other approaches, reflecting the effectiveness of the model design.
* The paper clearly states why it chooses energy as the main evaluation metric with strong statistical evidence.
* The paper provides a detailed sample-level comparison of existing baseline approaches, which is a solid solution and beneficial to successive works.
Weaknesses: * Although the authors stated why they chose energy as the main metric, the AAR is about 10% lower than dyMEAN. The authors did not provide a solid reason for what caused the result (Are all the lower results considered hacked or biased? Maybe a sample-level analysis would be more convincing.)
* The derivation of the final loss for fine-tuning could be simplified.
* The adaptation of RLHF into this approach should be further clarified.
Technical Quality: 3
Clarity: 2
Questions for Authors: * This paper relies heavily on energy function and sidechain packing methods in pyrosetta. Would it be helpful to directly train the diffusion model from scratch by optimizing the loss function using pyrosetta?
* There are several other tasks except for CDR-H3 design, as the paper aims to optimize the energy of the complex, would it be helpful to test the method with other downstream tasks that may be closely related to the energy(refer to dyMEAN)?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: As stated in the article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback! We address your questions as follows.
**Q1: AAR is about 10% lower than dyMEAN. The authors did not provide a solid reason for what caused the result.**
A1: As mentioned in Appendix A, AAR is easily hacked and conceals numerous issues. Certain intrinsic patterns within CDR-H3 sequences enable the model to achieve a seemingly satisfactory AAR by memorizing these patterns (also noted in the Limitations section of dyMEAN). However, this leads the model to produce highly similar sequences for all antigens, which is highly impractical. The possible reason is the severe lack of data, causing the model to merely learn the marginal distribution of residue types at each position. dyMEAN is a typical example of this scenario. To illustrate the sequences generated by dyMEAN with over 40% AAR, we reproduced dyMEAN, generated CDR-H3 for 55 antigens, and visualized the sequences. As shown in Fig. 2 in the uploaded pdf, through alignment (aligned and visualized by MAFFT[1]), it is evident that dyMEAN generates almost identical sequences, while AbDPO+ is considerably better. In this case, AAR is not a meaningful measure.
As for why AbDPO did not achieve a high AAR, we believe it is due to the different learning objectives between AbDPO and the baselines. The learning objective of baselines is to generate antibodies consistent with natural antibodies in terms of sequence and structure. Therefore, they perform better on metrics like AAR, which measures similarity to natural ones. However, AbDPO's learning objective is to better meet specific preferences, while not optimizing for consistency with natural ones. Consequently, its AAR is lower than dyMEAN. Nevertheless, sequence pLL leads to a significant improvement in AAR (31.25% to 36.27%) when used as an optimization objective.
Reference:
[1] Madeira, Fábio, et al. "The EMBL-EBI Job Dispatcher sequence analysis tools framework in 2024." Nucleic Acids Research.
**Q2: The adaptation of RLHF into this approach should be further clarified.**
A2: We can consider the antibody generation model obtained through conventional training methods as a pre-trained model. However, this pre-trained model does not meet our requirements, such as physical energy. This situation is similar to the challenges faced in current generative AI, where pre-trained models often require further alignment to match human preferences. Therefore, we can leverage methods like RLHF to align the antibody generation models with the desired properties, essentially optimizing the model. In AbDPO, we optimize the pre-trained model using such methods, allowing us to generate antibodies that satisfy multiple preferences.
**Q3: Would it be helpful to directly train the diffusion model from scratch by optimizing the loss function using pyrosetta?**
A3: Directly training the diffusion model from scratch is not practical.
The reasons are twofold:
1. Since the loss function using PyRosetta is not differentiable, RL algorithms such as policy gradient methods must be applied. In this case, the antibody design problem is formulated as a high-dimensional non-linear decision-making problem. The search space for this problem is large and involves both continuous and discrete variables. Directly training from scratch is hard due to its instability. Specifically, much effort is wasted, especially at the beginning of training, when the policy can only generate samples with unsatisfactory rewards. A feasible solution to this challenge is to use a pre-trained policy, which can significantly prune the search space and accelerate convergence; this has been validated in many applications, such as [1] and [2]. This solution is exactly the pretraining-finetuning paradigm we utilize in our work: the pre-trained diffusion model can be viewed as an expressive policy.
2. Another reason is that low PyRosetta energy is not our only goal. We aim to generate valid antibodies with low energy. Some intricate properties cannot be explicitly formulated as rewards but can be learned from data via generative modeling. Our optimization objective is equivalent to maximizing rewards with regularization toward the reference model. The regularization term preserves knowledge learned from the data, which cannot be achieved by training from scratch while only optimizing the energy function.
References:
[1] Silver, David, et al. Mastering the game of go without human knowledge. Nature
[2] Haarnoja, Tuomas, et al. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. ICML.
**Q4: Would it be helpful to test the method with other downstream tasks that may be closely related to the energy**
A4: We test our method on the affinity optimization. Specifically, we test AbDPO, along with its counterpart, DiffAb by evaluating the best $\Delta\Delta G$ against the reference antibodies among 300 generated samples using [1] on 16 randomly selected antigens in the test set. **Note that we do not use the metric as the signal in fine-tuning**. The results are shown as follows:
|pdb id|DiffAb|AbDPO|
|--|--|--|
|1a14|-3.91|**-5.46**|
|1n8z|-5.23|**-6.80**|
|1w72|0.34|**-1.82**|
|2cmr|-3.12|**-3.62**|
|3bn9|-2.44|**-4.42**|
|3mxw|-4.12|**-5.45**|
|3rkd|-6.21|**-6.99**|
|4dvr|-5.28|**-5.77**|
|4fqj|**-4.57**|-4.39|
|4g6m|-5.94|**-5.97**|
|4ki5|-3.17|**-4.83**|
|4xnq|-6.78|**-7.71**|
|5bv7|-6.19|**-7.66**|
|5en2|-3.33|**-5.54**|
|5f9o|-5.10|**-5.32**|
|5nuz|-2.49|**-5.28**|
Our results significantly outperform DiffAb, which shows generalizability to other tasks. *We believe that our method can perform even better if we consider the metric itself, i.e., predicted $\Delta\Delta G$ against the reference antibodies, in our preference definition when fine-tuning*.
Reference:
[1] Shan Sisi, et al. "Deep learning guided optimization of human antibody against SARS-CoV-2 variants with broad neutralization." Proceedings of the National Academy of Sciences
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. My major questions have been solved and I increased the score. | Summary: The paper proposes an approach for fine-tuning diffusion models for the design of antibodies. The core diffusion-based generative model comes from Luo et al. [36] and to my understanding there are no technical changes to it. The second component is direct preference based optimization, inspired by fine-tuning of large language models. This is the main technical contribution and builds entirely on the works by Rafailov et al [41] and Wallace et al [46]. The reward signal comes from binding free energy that is decomposed at the residue level. Different components include attraction and repulsion forces that can be linked to antibody function. To overcome possibly diverging gradients associated with different energy components, the authors propose to leverage gradient surgery from Yu et al [51] that essentially increases cosine similarity between gradients for different tasks/energy components.
Empirical evaluation is focused on qualitative aspects and whether the approach is able to discover better binders than initially given complexes, measured using binding free energy.
Strengths: I think it is an interesting approach to marry molecular simulations and physics-based energy calculations with diffusion and generative models. Especially given the small number of available crystal structures in the SAbDAB database.
Adding energy-based signal via direct preference optimization is an interesting re-purposing of that method.
Empirical evaluation goes beyond amino-acid recovery rate and RMSD metrics. The “success” at generating better binders quantified via improvement in binding free energy relative to the initial complex is an interesting metric.
Weaknesses: Table 1 indicates that the approach is able to design better binders, measured via binding free energy relative to the initial complex. However, this comes at the expense of increasing the number of hydrophobic residues which is typically associated with non-specific binding. This would in all likelihood be useless binders and from the perspective of function no better than baselines.
It would be interesting to see the results of a baseline that “fine-tunes” relative to ddG score directly. In my understanding, the results in Table 1 are vanilla baselines. None of them (e.g., MEAN, dyMEAN, HERN) has for instance been used in combination with iterative improvement algorithm and some physics based simulator. How would this compare to fine-tuning relative to the directed preference optimization?
It would be interesting to see the results relative to different physics-based simulators. I’m not sure how many different simulators were used to generate “fine-tuning” signal?
Would it be possible to include additional metrics such as lDDT, TM, and some metrics characterizing the fit on angles?
Technical Quality: 2
Clarity: 3
Questions for Authors: Could you also list how many mutations from the original CDR sequence are the improved binders (Table 1)?
Can you share the structural similarity scores between epitopes in the train and test folds?
If you limit the fraction of hydrophobic residues (PHR) to the value for the original complex and accept only designs with lower score, how does Table 1 look? Can you list relative to how many test fold complexes each method achieves better binding free energy (e.g., MEAN wins on N, AbDPO wins on 20, etc.)?
How does Table 1 look like if you apply ITA to the vanilla baselines?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. Please see below for our responses to the comments.
**Q1: The approach can design better binders, but comes at the expense of increasing the number of hydrophobic residues which is typically associated with non-specific binding.**
A1: This is exactly why we included PHR as an optimization objective in AbDPO+. We will show the performance of each method when limiting the PHR in A6, and AbDPO still performs better.
**Q2: It would be interesting to see the results relative to different physics-based simulators. I’m not sure how many different simulators were used to generate “fine-tuning” signal?**
A2: All the three energy signals were calculated using Rosetta, which is capable of both energy calculation and side-chain packing, and is widely used by researchers. In early stages, we also tried using OpenMM for energy calculations and obtained consistent results.
Additionally, we used OpenMM to calculate the potential energy of the antigen-antibody complexes whose CDR-H3 is generated by AbDPO fine-tuned according to Rosetta energy. Of the 6 complexes tested, AbDPO achieved lower energy than DiffAb on 5. (The remaining 49 complexes raise errors in the OpenMM workflow when adding hydrogens; these errors do not affect the Rosetta workflow, which does not require hydrogens.)
**Q3: Would it be possible to include additional metrics such as lDDT, TM, and some metrics characterizing the fit on angles?**
A3: For the antibodies generated by all methods, we measured both the TM-score[1] on the heavy chain (Hc_TM-score) and the torsion performance (Torsion-score) of the residues on CDR-H3. In terms of Hc_TM-score, all methods other than HERN perform comparably. The Torsion-score is derived from a two-dimensional KDE function based on the joint distribution of Phi and Psi torsions observed in natural CDR-H3s. The diffusion-based methods, DiffAb and AbDPO, perform significantly better than the other methods.
| |Hc_TM-score|Torsion_score|
|--|--|--|
|HERN|0.92|0.14|
|MEAN |0.98|0.34|
|dyMEAN|0.98|0.68|
|DiffAb|0.97|0.98|
|AbDPO|0.97|0.86|
|AbDPO+|0.98|0.90|
|AbDPO++|0.97|0.89|
Reference:
[1] Zhang, Yang, and Jeffrey Skolnick. "TM-align: a protein structure alignment algorithm based on the TM-score." Nucleic acids research.
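To make the Torsion-score construction above concrete, here is a minimal Python sketch of a two-dimensional Gaussian KDE score over (Phi, Psi) pairs. The bandwidth and the plain isotropic Gaussian kernel are illustrative assumptions; the actual KDE fitted to natural CDR-H3 torsions may differ.

```python
import math

def kde_score(phi_psi, reference, bandwidth=0.3):
    """Average Gaussian-KDE density of (phi, psi) torsion pairs under a
    reference distribution of natural CDR-H3 torsions.
    Angles are in radians; the bandwidth is an assumed hyperparameter."""
    def density(point):
        total = 0.0
        for ref in reference:
            # Squared Euclidean distance in (phi, psi) space.
            d2 = (point[0] - ref[0]) ** 2 + (point[1] - ref[1]) ** 2
            total += math.exp(-d2 / (2 * bandwidth ** 2))
        # Normalize by the number of reference points and the 2D Gaussian constant.
        return total / (len(reference) * 2 * math.pi * bandwidth ** 2)
    return sum(density(p) for p in phi_psi) / len(phi_psi)
```

Generated CDR-H3s whose torsions fall in high-density regions of the natural distribution would receive higher scores under this scheme.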
**Q4: Could you also list how many mutations from the original CDR sequence are the improved binders (Table 1)?**
A4: Yes. For the generated samples that exhibit energy levels close to the natural samples (i.e., the "successful" samples in Table 1), we list the minimum number of mutations from the original CDR sequence and the corresponding CDR lengths.
|pdb_id|CDR_length |N_mut_min|
|--|--|--|
|1a14|15|7|
|1ic7|7|3|
|1iqd|10|7|
|2b2x|12|6|
|2dd8|11|3|
|3bn9|9|8|
|4ffv|10|6|
|4qci|13|8|
|5d93|9|6|
**Q5: Can you share the structural similarity scores between epitopes in the train and test folds?**
A5: We follow the metric widely used in protein structure design tasks to calculate the structural similarity scores. The similarity between two protein sets is defined as the average TM-score of each sample in one set to its most similar protein in another set. The similarity between the train and test folds is 0.6965. Following MEAN, the epitope is defined as the 48 residues of the antigen closest to the paratope.
Additionally, we believe there is a misunderstanding here. The training data only appears during the pre-training phase, while in the optimization phase, we only used synthetic data.
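The set-to-set similarity described above can be sketched in a few lines, assuming the pairwise TM-scores between folds have already been computed with TM-align (the matrix layout here is a hypothetical convention, not code from the paper):

```python
def set_similarity(tm_matrix):
    """Average, over samples in the first set (rows), of the TM-score to
    the most similar protein in the second set (columns).
    tm_matrix[i][j] holds the precomputed TM-score between
    test-fold epitope i and train-fold epitope j."""
    return sum(max(row) for row in tm_matrix) / len(tm_matrix)
```

With the precomputed matrix for the train/test folds, this average-of-row-maxima is the quantity reported as 0.6965.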
**Q6: If you limit the fraction of hydrophobic residues to the value for the original complex and accept only designs with lower score, how does Table 1 look?**
A6: We perform an evaluation only on the samples that contain hydrophobic residues not exceeding the natural one, and the results are shown below. It can be seen that under this setting, the energy performance of almost all methods has deteriorated, but AbDPO and AbDPO+ still perform the best in terms of the two energies.
| | AAR| CDR $E_{\text {total}}$ | CDR-Ag$\Delta G$ | pLL| PHR|
|--|--|--|--|--|--|
|HERN| 32.30% |11953.56|1949.35|-1.95|25.81%|
|MEAN| 37.42% |8127.87|1412.02|-1.93|23.85%|
|dyMEAN| 38.33%|6253.15|2906.5|-1.94|31.84%|
|DiffAb|34.47%|2129.9|1646.6|-2.14|27.22%|
|AbDPO|31.24%|907.18|453.71|-2.25|31.24%|
|AbDPO+|32.91%|1464.53|815.14| -2.09|32.91%|
**Q7: Can you list relative to how many test fold complexes each method achieves better binding free energy?**
A7: Sure. For each complex, the method that achieves the best performance is considered to "win" that complex. We record the number of "win" complexes, N_win. We also observe that there are instances where none of a method's samples have a PHR lower than that of the natural sample. Therefore, we also list the number of complexes, denoted N_phr, for which the method can generate samples with a PHR not exceeding that of the natural sample. The specific results are as follows:
| | N_win | N_phr |
|--|--|--|
|HERN|8|35|
|MEAN|2|32|
|dyMEAN|3 | 12|
|DiffAb| 1 | 38|
|AbDPO| 20 | 36|
|AbDPO+| 18| 52|
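The "win" counting described in A7 amounts to the following sketch; the method names and the lower-is-better convention for binding free energy are the only assumptions:

```python
def count_wins(energies):
    """energies: dict mapping method name -> list of binding free
    energies, one per test complex (lower is better). Returns N_win
    per method: the number of complexes on which that method achieves
    the lowest energy among all methods."""
    methods = list(energies)
    n_complexes = len(next(iter(energies.values())))
    wins = {m: 0 for m in methods}
    for i in range(n_complexes):
        # The method with the lowest energy on complex i wins it.
        best = min(methods, key=lambda m: energies[m][i])
        wins[best] += 1
    return wins
```

Note that ties would go to the first method listed; a real evaluation would need an explicit tie-breaking rule.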
**Q8: How does Table 1 look if you apply ITA to the vanilla baselines?**
A8: We follow the ITA setting from the MEAN codebase and implement ITA for MEAN, dyMEAN, and DiffAb. We adopt CDR $E_{\text{total}}$ and CDR-Ag $\Delta G$ as the two targets for gathering high-quality candidates for each antibody. We observe that directly generating high-quality candidates for these two targets can negatively affect the other preferences. This further demonstrates the effectiveness of AbDPO in optimizing multiple objectives simultaneously.
| | AAR | RMSD | CDR $E_{\text {total}}$ | CDR-Ag$\Delta G$ | pLL | PHR | N_success|
|--|--|--|--|--|--|--|--|
| MEAN |32.56% |2.19| 4731.58 | 363.26 |-1.83 |75.12% |0 |
| dyMEAN |39.69% |2.00|6105.83 |1665.72 |-1.63 |42.09% |0 |
| DiffAb |35.93% | 2.15 |1288.67 |814.93 |-1.82 |59.57% |0 |
| ABDPO| 31.25% |1.98 |629.44 |307.56 |-2.18 |69.67% |9 |
| ABDPO+ | 36.27% |2.01 |1106.48 |637.62|-2.00 |44.21% |5 |
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I’m satisfied with the response and will be increasing my score. Well done! | Summary: This paper applies direct preference optimization to antibody design. Specifically, it uses Rosetta binding energy to guide a pre-trained diffusion model to generate antibody CDR structures with low binding energy.
Strengths: * Optimizing antibody binding energy is an important problem.
* The proposed gradient surgery procedure is technically interesting.
Weaknesses: * To evaluate binding energy using Rosetta, it is necessary to run side-chain packing and energy minimization to clean up the predicted structure. Therefore, it usually takes a couple of minutes to evaluate the binding energy of just one structure using Rosetta. In other words, it is computationally expensive to guide diffusion models using Rosetta.
* Rosetta side-chain packing is stochastic and non-deterministic. Therefore, if we relax the structure generated by the diffusion model multiple times, the calculated Rosetta energies will be very different, and the standard deviation can be very high (sometimes two or three times higher than the mean). In other words, it is very tricky to construct a preference dataset, because you need to compare the binding energy distributions between two CDR sequences, and their standard deviations are very high.
* Despite the high standard deviation, the binding energies reported in this paper do not include standard deviations. It seems that the authors only calculate the Rosetta binding energy once for each structure instead of running side-chain packing or energy minimization multiple times and taking the average. Therefore, the reported results may not be statistically significant.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Can you report your model performance by running Rosetta relaxation multiple times with different random seeds and report standard deviation of the reported binding energy for each method?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Please see below for our responses to the comments.
**Q1: It is computationally expensive to guide diffusion models using Rosetta.**
A1: Yes, the computational expense is indeed a drawback of Rosetta. We are aware of Rosetta's limitations and have discussed them in the appendix. However, despite these issues, the time cost of optimizing AbDPO for a single antigen is less than a day. Additionally, AbDPO is not specific to Rosetta. For example, other packing tools (like DiffPack) and energy calculation methods (such as OpenMM, which is significantly faster than Rosetta) can also be used. In fact, AbDPO supports optimization of any type of property, and we chose Rosetta because it can perform both packing and energy calculations and is widely used.
**Q2: Rosetta side-chain packing is stochastic and non-deterministic. Therefore, if we relax the structure generated by diffusion model multiple times, the calculated Rosetta energy will be very different, and the standard deviation can be very high (sometimes it can be twice or three times higher than the mean). In other words, it is very tricky to construct a preference dataset because you need to compare the binding energy distribution between two CDR sequences, and their standard deviation is very high.**
A2: We recognize that the same CDR-H3 (with the same sequence and structure) may result in different side-chain conformations after packing, and there will be energy differences between these conformations. We observed this in our early exploration but still performed side-chain packing only once for each sample. The reasons are as follows:
- The differences between different samples generated by existing methods (such as DiffAb) are generally much larger than the energy differences between different side chain conformations of the same sample. For natural antibodies, total energy is relatively low due to their reasonable structure, so different side chains may have a significant impact on total energy. However, for generated antibodies with several clashes, the influence of side-chain conformations is not significant as the total energy could be extremely high.
- Repeating the packing process on the same sample can indeed yield more accurate energy labels, but it will consume more time and we do not want the excellent performance of AbDPO to be based on a large amount of meticulously selected data. Although sampling only once may lead to incorrect judgments of win-lose relationships between two samples, this usually occurs when the two samples are quite similar. As long as the better sample has a higher probability of being sampled at lower energy, the model will generally be optimized toward generating antibodies with lower energy.
- DPO is derived based on the Bradley-Terry model. In the Bradley-Terry model, the labels of winner and loser are not deterministic but exist in probabilistic form. We perform packing for each sample only once, which is similar to sampling winner and loser once. This is also theoretically justifiable.
Furthermore, we use 1a14 as an example and sample 100 synthetic data from the 1a14 training dataset. For each synthetic sample, we repeat packing 128 times with different random seeds. Then we can compare the deviation between different synthetic data and different sidechains from the same synthetic data. The results are shown in Fig. 1 in the uploaded pdf, and the deviation between different synthetic data (the red line) is far greater than the deviation between different sidechains from the same synthetic data (the blue violin plot indicates the distribution of deviation within each synthetic data). This further supports our justification.
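As an illustrative sketch of the Bradley-Terry view in A2, the following toy code turns a pair of energies into a probabilistic winner/loser label, mirroring the idea that packing each sample once is akin to sampling the label once. The inverse-temperature beta is an assumed hyperparameter, not a value from the paper:

```python
import math
import random

def bt_win_prob(energy_a, energy_b, beta=1.0):
    """Bradley-Terry probability that sample A beats sample B when
    lower energy is preferred; beta is an assumed temperature."""
    return 1.0 / (1.0 + math.exp(-beta * (energy_b - energy_a)))

def sample_label(energy_a, energy_b, rng, beta=1.0):
    """Draw a stochastic winner label once, analogous to performing
    side-chain packing a single time per sample."""
    return "A" if rng.random() < bt_win_prob(energy_a, energy_b, beta) else "B"
```

When the energy gap between two samples is large (the common case for generated antibodies), a single draw almost always recovers the correct ordering, which is the intuition behind packing once.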
**Q3: Can you report your model performance by running Rosetta relaxation multiple times with different random seeds?**
A3: We repeat the optimization process 32 times with different random seeds for each AbDPOw/O generated sample. The average $\text{CDR} E_{\text{total}}$ and $\text{CDR-Ag} \Delta G$ of all generated antibodies are 65.19 and -2.67, which are slightly different from the reported values of 69.82 and -3.00. Therefore, the experiment results reported in our paper are reliable. The reason we did not report the averaged results over multiple runs is due to time cost considerations.
**Q4: Can you report standard deviation of the reported binding energy for each method?**
A4: Thanks for your reminder, we provide the standard deviation of two energies for each method below. It could be seen that significant deviations are commonly present in the results generated by each method, and the deviation of AbDPO is smaller than that of DiffAb.
| | CDR $E_{\text{total}}$ avg | CDR $E_{\text{total}}$ std | CDR-Ag $\Delta G$ avg | CDR-Ag $\Delta G$ std |
|----------|-----------------|-----------------|---------------|---------------|
| HERN | 10887.77 | 1313.08 | 2095.88 | 1051.85 |
| MEAN | 7162.65 | 2421.58 | 1041.43 | 1322.44 |
| dyMEAN | 3782.67 | 482.51 | 1730.06 | 544.91 |
| DiffAb | 1729.51 | 883.14 | 1297.25 | 1016.96 |
| AbDPO | 629.44 | 446.02 | 307.56 | 378.59 |
| AbDPO+ | 1106.48 | 678.9 | 637.62 | 631.99 |
| AbDPO++ | 1349.39 | 754.61 | 747.89 | 692.09 |
| AbDPOw/O | 69.82 | 59.17 | -3 | 6.61 |
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your response. I think the general methodology is applicable to properties other than rosetta. The new experiment shows the standard deviation and I encourage the authors to put them into the final paper. I have increased my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback. We will be sure to incorporate the deviation and other discussions into the final version. We greatly appreciate your recognition of our paper's soundness, presentation, and contributions, as reflected in your ratings for each of them. We would like to kindly remind you that a score of 5 is considered borderline and is advised to be used sparingly according to the review guidelines. We would really appreciate it if you could consider raising your score. Regardless of your final rating, we are sincerely grateful for your valuable suggestion and continued support. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for your constructive feedback. We placed Figure 1 and Figure 2 in the PDF file.
Pdf: /pdf/1c5dfaf432d4c963cf05fb886437fcf7c3123c7d.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
DeepITE: Designing Variational Graph Autoencoders for Intervention Target Estimation | Accept (poster) | Summary: The paper introduces a deep learning framework for identifying intervention targets in causal systems by amortizing the problem, although assuming that the causal graph is known. The framework, called DeepITE, employs a variational graph autoencoder (VGAE) that learns from both labeled and unlabeled data across different intervention targets and causal graphs. The model can quickly identify intervention targets in new samples without the need for retraining, addressing the inefficiencies of current methods which require frequent re-estimations with minor data changes. DeepITE is validated through comprehensive testing against 13 baseline methods, showing superior performance.
Strengths: Originality
The proposed variational graph autoencoder (VGAE) framework tailored for the task of Intervention Target Estimation (ITE) is novel, to the best of my knowledge. The authors propose to repurpose parts of the VGAE framework from DAG-GNN for the purpose of learning the intervention target, so the approach is not entirely novel, but their application of it to the ITE problem definitely is.
Quality
The methodology is sound and well explained. As far as I can tell, the experiments are well-designed. The method is validated against a variety of datasets and baseline methods. The empirical results convincingly support the superiority of DeepITE over existing models, particularly in terms of scalability and efficiency in handling large and complex graphs.
Clarity
The paper is well-written.
Significance
This work is significant, it may be important for root-cause analysis in systems where the causal graph is known (which is not often the case).
Weaknesses: 1. The authors should make it explicit from the introduction that their framework assumes that the graph is known (but not the full SCM). As far as I understand, this is not the case in Yang et al. [7]. I believe the manuscript would be clearer if it had a table summarizing the different efforts for RCA and ITE, explaining for each paper what they assume (wrt knowledge of causal model, nature of intervention, max number of nodes targeted, and any other relevant assumptions), so that a reader may situate the work more carefully.
2. The claim that the model can handle soft interventions seems unsubstantiated. I quote "A probability of gamma_i = 0 being one indicates a hard intervention, whereas any other value suggests a soft intervention". Given that there are many different layers of complexity at play here, I do not believe this is a reasonable claim. For example, this is a clear instance of model misspecification, in which I do not expect that there would be consistency. For certain types of soft interventions, it is plausible that the proposed model (based on hard interventions) would erroneously predict the set of intervened nodes. Additionally, the model's inference strategy uses variational approximations and differentiable relaxations of discrete distributions, which makes it very hard to assess whether "probability of gamma_i = 0 [equals] one".
3. I wish to have seen some form of consistency theory for the proposed method. So far it has little to no theoretical backing, even though I understand that the authors have explained well why the generative model has a nice causal semantic.
4. A main advantage of the method is to amortize across different settings, but I am not sure this claim is perfectly justified. For example, DeepITE mix performs worse than the sep variant, and it performs worse than CI-RCA in Table 1 (contrary to what is written in the text!). Additionally, the authors did not quite explain how many graphs were put together, and I did not see in the main text an ablation study showing that the method performs better with more graphs. I wish to have seen an assessment of the amortization to justify that it actually works.
5. It would be great if the authors could explain when they expect the model to perform well, or not, and which components are helpful for its empirical success.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you clarify whether the framework assumes the causal graph is known a priori, as this seems to differ from approaches like Yang et al. [7]? Would it be possible to include a table summarizing key assumptions of different RCA and ITE efforts for clearer comparative context?
- Can you substantiate the claim that the model can handle soft interventions, given the complexities associated with different types of interventions and the potential for model misspecification?
- Is there any consistency theory or additional theoretical backing for the proposed method, especially regarding its generative model and causal semantics?
- How do you justify the claim about the model's ability to amortize across different settings, particularly given the mixed performance of DeepITE in various configurations as noted in your results?
- Could you discuss under what conditions the model is expected to perform well, which components significantly contribute to its empirical success, and any limitations in its current design?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Q1 summarizing table_
We agree that LIT does not require a given graph, and we will state explicitly in the introduction that DeepITE requires one. In response to your suggestion, **we have provided a comparative analysis of the assumptions inherent in various methods in Table R1 of the PDF attached to our global response.** This table highlights that the primary advantages of DeepITE are its scalability and the flexible amortization during inference, allowing for adaptation without the need for retraining on new graphs and intervention targets. It also has limitations: it struggles to address confounders, and it requires the graph structure as an input, despite its capacity to learn structural equations. **These two limitations have been mentioned in Appendix H (Page 20).**
_Q2 soft interventions_
Thank you for your valuable feedback! We have examined DeepITE's performance with various proportions of hard and soft interventions. Soft interventions are modeled by replacing the linear structural equations related to intervention targets with quadratic ones. The results, presented in Figure R1(c) of the attached PDF, show that **DeepITE is robust to different mixtures of hard and soft interventions, demonstrating its capability to handle both types effectively.** In addition, as detailed in Appendix G.3 on Page 18, we conduct further experiments based on classical ITE settings that also involve soft interventions. We will mention "soft interventions" explicitly in that section. The results, showcased in Table 5 in our paper, reinforce **the superiority of DeepITE over other baseline models in these scenarios, further confirming its effectiveness in dealing with soft interventions.**
_Q3 consistency_
In our paper, we primarily demonstrate that the decoder, as defined in Eq (5), can address interventional queries by modifying the adjacency matrix $A$ through graph surgery (see Propositions 1-3), since **this mechanism lays the foundation for deriving DeepITE for ITE**. Whether the estimated set of intervention targets is consistent with the ground truth relies on the properties of the amortized variational inference framework we use.
On the other hand, we notice that **the consistency of VGAE for causal inference (the inverse process of ITE, see Lines 158-162) has been proven in [R4]**. Due to the time limit, we are unable to prove the consistency of DeepITE at this moment, but **will highlight this limitation in Appendix H (Page 21), identifying it as a promising area for future research:**
> Finally, proving the consistency and identifiability of DeepITE, and more broadly in the application of VGAEs for ITE, remains an interesting avenue for future work. Notably, such theoretical guarantees have been established for VGAEs in the context of causal inference (both observational and interventional) in [R4].
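For concreteness, the graph-surgery operation underlying Propositions 1-3 can be sketched as follows for a hard intervention: severing all incoming edges of each intervened node in the adjacency matrix. This is a generic illustration, not the paper's decoder implementation:

```python
def graph_surgery(adjacency, targets):
    """Hard-intervention graph surgery: given an adjacency matrix
    (adjacency[i][j] == 1 means edge i -> j) and a set of intervened
    nodes, sever all incoming edges of each target so its value is no
    longer determined by its parents. Returns the mutilated matrix."""
    n = len(adjacency)
    mutilated = [row[:] for row in adjacency]  # deep-enough copy of rows
    for j in targets:
        for i in range(n):
            mutilated[i][j] = 0                # remove edge i -> j
    return mutilated
```

Soft interventions would instead keep the edges and modify the mechanism, which is why a hard-intervention surgery alone cannot represent them exactly.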
_Q4 amortization_
Thank you for your constructive suggestion! We have investigated the performance of DeepITE as more graphs with different sizes are trained together. Specifically, we begin with 100 graphs, each containing 50 nodes and 1000 samples, and assess the model's performance on testing graphs containing 50 nodes. We subsequently introduce additional groups of graphs: 50 graphs with 100 nodes, followed by another 50 graphs also with 100 nodes, and finally 50 graphs with 500 nodes, followed by another 50 graphs also with 500 nodes.
As detailed in Figure R1(e), our findings indicate a gradual degradation in the performance of DeepITE (mix) as we incorporate more graphs of varying sizes, attributed to the amortization error. However, **this reduction in performance is minimal.**
Moreover, the results in Table 1 of our original paper show that while DeepITE (mix) performs slightly worse than DeepITE (sep), it even outperforms DeepITE (sep) training exclusively on 100-node graphs in terms of Recall@1. Based on this evidence, **we maintain that the amortization process across graphs does not significantly hinder the performance of DeepITE.**
_Q5 performance boundary & components contribution_
Thank you for your constructive feedback! We have conducted additional ablation studies to investigate the performance of DeepITE under various conditions, specifically focusing on graph size, the number of interventions, and sample size. The findings are illustrated in Figure R1 in the attached PDF. Our results indicate that the **performance of DeepITE gradually declines as graph size and the number of interventions increase, and as sample size decreases, which aligns with our expectations.**
To evaluate the contributions of different components within DeepITE, we performed a detailed ablation study, the results of which can be found in Appendix G.6 (Page 20). Our analysis revealed that the **specific designs of both the encoder and decoder play critical roles in the model's empirical success.**
Lastly, **we have outlined the limitations of DeepITE in Appendix H (Page 20).**
[R4] Zečević et al. Relating GNN to structural causal models, 2021.
---
Rebuttal 2:
Title: A follow-up message about the rebuttal
Comment: Dear Reviewer HUbk,
We wanted to kindly check in on the status of the rebuttal, as there are 2 days remaining for the rebuttal period. Please let us know if there is anything else we can provide to contribute to our work. We would greatly appreciate it if you could provide your insights at your earliest convenience.
Thank you for your time and consideration.
Best regards,
Authors of DeepITE
---
Rebuttal Comment 2.1:
Comment: I would like to thank the authors for their detailed response, and the additional experiments. I do not have additional questions, and will consider changing my score while discussing with other reviewers.
---
Reply to Comment 2.1.1:
Title: Reply to Reviewer HUbk
Comment: Thank you very much for your positive feedback. We greatly appreciate your time and thoughtful review. | Summary: The paper presents DeepITE, a novel deep learning framework designed for Intervention Target Estimation (ITE) in complex systems. DeepITE addresses these issues by employing a variational graph autoencoder (VGAE) that can learn from both unlabeled and labeled data across various intervention targets and causal graphs. The framework is capable of self-supervised and semi-supervised learning, allowing it to identify intervention targets without the need for retraining with new instances. It demonstrates improved performance in Recall@k metrics and faster inference times, especially for large graphs.
Strengths: 1. **Originality**: DeepITE addresses a novel problem formulation in the context of causal analysis, focusing on estimating intervention targets in a manner that is not addressed by existing methods. Additionally, this method overcomes limitations of prior work by enabling collaborative learning across different instances and effectively incorporating labeled data, which was underutilized in previous approaches.
2. **Empirical Validation**: Extensive experiments are conducted, comparing DeepITE against 13 baseline methods on both synthetic datasets and real-world dataset, demonstrating its superiority in Recall@k metrics and inference time, particularly for large graphs.
3. **Presentation**: This paper clearly introduces the ITE problem and existing solutions, and elucidates the relationship between the proposed method and existing works (DAG-GNN and VACA), emphasizing the flexibility and effectiveness of DeepITE in handling graphs of different sizes and structures.
Weaknesses: While the paper presents a significant contribution with the introduction of DeepITE, there are several areas where the work could be improved towards its stated goals:
1. **Theoretical Depth**: The paper could benefit from a more thorough theoretical analysis, particularly in understanding the limitations of the VGAE approach compared to traditional causal inference methods. For instance, a deeper discussion on the assumptions made by DeepITE and how they compare to assumptions in established causal frameworks would be valuable.
2. **Scalability Analysis**: While the paper mentions the efficiency of DeepITE, a more detailed analysis of its scalability with respect to the size of the graph and the number of interventions would be beneficial. This could include profiling the computational complexity and memory usage for very large graphs.
3. **Intervention Types**: DeepITE is designed to handle interventions, but the paper could provide more details on the types of interventions it can effectively estimate. For example, it would be useful to know how the model performs with different mixes of hard and soft interventions.
4. **Confounding Factors**: The paper assumes the absence of confounders, which may not hold in real-world scenarios. Future work could explore extensions of DeepITE to handle potential confounding, possibly through the integration of domain knowledge or advanced causal inference techniques.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. **Identifiability**: Does DeepITE have identifiability, and can it guarantee that the optimal set of intervention variables can always be recovered from the data?
2. **Sample size**: Since this method is based on deep networks, are there any specific requirements for the amount of data samples?
3. **Robustness to random initialization**: According to Algorithm 1, this method needs to be initialized with some random parameters. Is it robust to the random initialization?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Q1 theory_
We acknowledge the importance of theoretical analysis. However, due to the time limit, **we instead provide a comparative analysis of the assumptions inherent in various methods in Table R1 of the PDF attached to our global response.** This table highlights that the primary advantages of DeepITE are its scalability and the flexible amortization during inference, without the need for retraining on new graphs and intervention targets (ITs). However, it struggles to address confounders, and it requires the graph structure as an input, despite its capacity to learn structural equations. **These two limitations have been mentioned in Appendix H (Page 20).**
_Q2 scalability_
Thank you for your insightful feedback! We have provided the performance of DeepITE as a function of the graph size and the number of interventions respectively in Figs R1(a-b) in the attached PDF. Our findings indicate that while performance in terms of Recall@1 declines as the graph size increases, Recall@5 remains stable (above 0.9), suggesting that the **true ITs consistently appear among the top 5 candidates, even for graphs with 1000 nodes—a size that is already considered quite large for causal analysis**. Moreover, as the number of interventions rises, DeepITE's performance decreases gradually; however, **even with 5 interventions, the Recall@k remains at 0.825**.
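For reference, the Recall@k metric used throughout can be sketched as follows, assuming the model outputs a per-node intervention score and higher scores indicate more likely targets (the ranking convention is an assumption):

```python
def recall_at_k(scores, true_targets, k):
    """Recall@k for intervention target estimation: the fraction of
    ground-truth targets that appear among the k nodes ranked highest
    by the model's per-node intervention scores."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top_k = set(ranked[:k])
    return len(top_k & set(true_targets)) / len(true_targets)
```

This explains why Recall@5 can stay near 1.0 even when Recall@1 degrades: the true targets remain inside the top-5 candidates even when they are not ranked first.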
_Q3 intervention types_
Thanks for your constructive suggestion! We have examined DeepITE's performance with various proportions of hard and soft interventions. Soft interventions are modeled by replacing the linear structural equations related to intervention targets with quadratic ones. The results, presented in Figure R1(c) of the attached PDF, show that **DeepITE is robust to different mixtures of hard and soft interventions, demonstrating its capability to handle both types effectively.** In addition, as detailed in Appendix G.3 (Page 18), we conducted experiments based on classical ITE settings with soft interventions. We will mention "soft interventions" explicitly in that section. The results, shown in Table 5 in our paper, reinforce **the superiority of DeepITE over other baseline models in dealing with soft interventions.**
_Q4 confounders_
**We acknowledge the importance of addressing confounders as a critical future work in Appendix H (Page 21)**. In line with other ITE works, where algorithms are initially developed for scenarios without confounders [3] before being extended to handle confounders [6], we also plan to extend DeepITE for confounders in future work.
_Q5 identifiablity_
In our paper, we primarily demonstrate that the decoder, as defined in Eq. (5), can address interventional queries by modifying the adjacency matrix $A$ through graph surgery (see Propositions 1-3), since **this mechanism lays the foundation for deriving DeepITE for ITE**. Whether the optimal set of ITs can be detected relies on the properties of the amortized variational inference framework we use.
On the other hand, we notice that **the identifiability of VGAE for causal inference (the inverse process of ITE, see Lines 158-162) has been proven in [R4]**. Due to the time limit, we are unable to prove the identifiability of DeepITE for now, but **will highlight this limitation in Appendix H, identifying it as a promising area for future research:**
> Finally, proving the consistency and identifiability of DeepITE, and more broadly in the application of VGAEs for ITE, remains an interesting avenue for future work. Notably, such theoretical guarantees have been established for VGAEs in the context of causal inference (both observational and interventional) in [R4].
_Q6 sample size_
Thanks for pointing it out! We have depicted the performance of DeepITE as a function of the sample size in Figure R1(d) in the attached PDF. Here we choose 10 graphs for training and change the sample size from 25 to 1000 for each graph. Our findings indicate that **DeepITE is generally robust to variations in sample size, though a larger sample size can enhance its performance**. In particular, on graphs with 50 nodes, DeepITE achieves a Recall@1 of 0.812 and a Recall@5 of 0.984 even with just 50 samples for each graph (500 samples in total). This success may stem from the collaborative learning approach in DeepITE and the relatively few parameters in the GNN-based encoder and decoder. In contrast, traditional ITE methods [3,6,10] typically require thousands of samples for a single graph and intervention set to perform well. However, it is also important to note that **the performance of DeepITE declines rapidly when the sample size is extremely small (e.g., 25 samples per graph), which aligns with our expectations**.
_Q7 random initialization_
DeepITE is robust to random initialization. **Since each node in a graph is initialized randomly and all graphs are trained collaboratively**, the method can converge to optimal solutions regardless of the initial parameters. **We will mention this point in our revised paper.**
[R4] Zečević et al. Relating GNN to structural causal models, 2021.
---
Rebuttal 2:
Title: A follow-up message about the rebuttal
Comment: Dear Reviewer bhQC,
We wanted to kindly check in on the status of the rebuttal, as there are 2 days remaining in the rebuttal period. Please let us know if there is anything else we can provide or clarify to address your remaining concerns. We would greatly appreciate it if you could share your insights at your earliest convenience.
Thank you for your time and consideration.
Best regards,
Authors of DeepITE | Summary: This paper proposes a deep learning approach for Intervention Target Estimation (ITE), which is an important problem in causal discovery and inference. The authors argue that traditional methods in this area can only process each instance independently and are computationally inefficient. To address these limitations, the authors propose a variational graph auto-encoder whose key components involve an encoder and a decoder that are specially designed to accommodate the properties of causal factorization/intervention. The authors also discuss how to instantiate the encoder and decoder with modern GNNs and use variational inference for deriving the objective. Experiments on synthetic datasets and two real-world datasets verify the effectiveness of the model against several state-of-the-art baselines.
Strengths: The paper studies an important problem whose significance is embodied by extensive applications in various domains.
The proposed method seems interesting and technically sound. The proposed components are properly justified with theoretical analysis and explanations against potential alternatives. Also, comparison with related models is sufficiently conducted.
The experiments are solid and comprehensive. The improvements achieved over the state-of-the-arts look promising.
Weaknesses: The model section is not entirely clear, and some parts need further explanation.
The technical novelty is somewhat weakened given the direct usage of DAG-GNN in Sec 5.1.
The experiments lack qualitative analysis or case study to verify the model.
The presentation can be improved, especially the descriptions of prior art and the motivation in Sec. 1 (the third and fourth paragraphs are too long and involved).
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How precisely is the semi-supervised learning conducted using the model? How is the labeled data incorporated into the ELBO?
2. Can the authors provide more explanation of why Recall is used as the only metric for evaluation?
3. Given the studied problem, maybe more qualitative analysis or a case study is expected to further verify the model.
Minor:
1. The authors argue that GAT enables inductiveness and consider this as the advantage of GAT over other GNNs. This argument seems problematic, since in the context of the proposed framework, the GAT can in principle be replaced by other off-the-shelf GNNs, such as GCN and SGC. Maybe more explanations or ablation studies are needed for justification of this design choice.
2. Some typos for reference:
line 106: "but this too suffers from scalability issues with large graphs"
line 109: "zeroes in on ITE"
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations are properly discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive evaluation and encouraging feedback on our paper. We deeply appreciate your constructive comments and valuable suggestions.
_Q1 direct use of DAG-GNN_
DeepITE does not directly use DAG-GNN. **We have explicitly discussed the relation between DeepITE and DAG-GNN on Lines 260-270 on Pages 6-7.** In summary, the key distinction lies in their objectives: DAG-GNN is designed for causal discovery (learning the DAG structure), whereas DeepITE focuses on ITE. Although both models share some similarities as variants of VGAEs, their inference models and latent spaces differ significantly. Furthermore, **our ablation study in Appendix G.6 demonstrates the superiority of DeepITE over DAG-GNN for ITE tasks.**
_Q2 case study_
Thanks for pointing it out! We have included a case study as follows:
> We first present the causal graph in Figure R2. Recall that we focus on the RCA problem and we aim to identify and present the root causes of the system to users. Note that while we have labels for the root causes in our testing data, we only possess observations for the observable variables represented in the graph. Consequently, all methodologies employed can only localize observable variables as intervention targets (ITs), rather than directly identifying the root causes. For instance, RootCause 2 (a weak signal in marginal areas) can influence feature19, feature X, and feature Y. However, RootCause 3 also affects feature X. As a result, when a given method identifies feature X as an IT, it becomes challenging to ascertain whether RootCause 2 is indeed the true root cause.
> Interestingly, **DeepITE model demonstrates superior performance on this dataset by effectively identifying features exclusive to a specific root cause.** We evaluated the performance of all methods on samples 1758, 1760, and 1093, and summarized the results in Table R2. For sample 1758, where the true root cause is RootCause 2, DeepITE identifies feature19 as the IT, thus it is evident that RootCause2 should be the root cause. In contrast, other methods identify either feature X or feature Y, making it difficult to pinpoint the true root cause. Similarly, for sample 1760, DeepITE identifies feature60 as the IT, which is also exclusive to the true root cause, RootCause 3. On the other hand, in the case of sample 1093, DeepITE selects both feature60 and feature19, enabling us to conclude that both RootCause 2 and RootCause 3 are relevant root causes, a finding that aligns with the ground truth.
_Q3 presentation_
To address your concern, we will break up the third and fourth paragraphs into two shorter paragraphs each (e.g., the third paragraph can be divided at Line 46). Additionally, we will provide an introduction to less well-known priors, such as Jeffrey's prior, in the appendix, which will further enrich the reader's understanding of the priors.
_Q4 semi-supervised learning & labeled data_
As detailed in Lines 283-309 in Section 5.3 (Page 7), the labeled data can be utilized to train the inference network, enabling it to more accurately identify intervention targets. In particular, the term $q(\gamma_i|\mathbf{x}, A)$ is replaced by the ground truth $\gamma_i$ when computing the ELBO (Eq. (14)), and an additional term is introduced to maximize the log-likelihood $\log q(\gamma_i^*|\mathbf{x}, A)$.
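In symbols, the resulting semi-supervised objective for a labeled sample could be sketched as follows (our shorthand, not the paper's exact Eq. (14); $\lambda$ is an illustrative trade-off weight and $\gamma_i^*$ denotes the ground-truth indicator):

```latex
\mathcal{L}_{\text{semi}}
  = \underbrace{\mathrm{ELBO}(\mathbf{x}, A)\big|_{\gamma_i = \gamma_i^*}}_{\text{ELBO with } \gamma_i \text{ clamped to its label}}
  + \lambda \, \log q(\gamma_i^* \mid \mathbf{x}, A)
```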
_Q5 why use Recall_
As defined on Lines 326-328 on Page 8, Recall@k measures the proportion of true intervention targets (ITs) that are successfully captured within the top k ranked candidates proposed by each method. When k = 1, our goal is to pinpoint the intervention targets based on the highest-ranked candidate. We prioritize Recall@k because, in practice, false positives can be eliminated through further analysis, while false negatives are irrecoverable as they get lost among the numerous true negatives. This metric is widely adopted in the literature [4, 9, 29]. **We will clarify this point in the revised paper.**
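To make the metric concrete, Recall@k can be computed as in the following sketch (illustrative code, not part of our released implementation; the node indices are made up):

```python
def recall_at_k(ranked_candidates, true_targets, k):
    """Fraction of true intervention targets that appear
    among the top-k ranked candidates."""
    top_k = set(ranked_candidates[:k])
    hits = sum(1 for t in true_targets if t in top_k)
    return hits / len(true_targets)

# Example: true ITs {2, 5}; candidate ranking [5, 1, 2, 0, 3]
print(recall_at_k([5, 1, 2, 0, 3], {2, 5}, 1))  # 0.5 -- only node 5 is in the top 1
print(recall_at_k([5, 1, 2, 0, 3], {2, 5}, 3))  # 1.0 -- both true ITs are in the top 3
```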
Furthermore, **we have incorporated additional metrics in our evaluations**, such as Root.Acc and Score in the ICASSP experiments (see Table 7), and MMD/MSE in the ablation study (see Table 8).
_Q6 why use GAT_
We acknowledge that any inductive spatial GNN can be used as the inference network in DeepITE, and **will mention this point in our paper**. However, **we choose GAT since it is more flexible compared to GCN and SGC** [R3]. This flexibility stems from GAT's ability to dynamically weigh the importance of different nodes, thus allowing the variational distribution given by the inference network to better approximate the exact posterior distribution. **This advantage has been demonstrated in Appendix G.6 (Page 20), where we replace the GAT encoder with the encoder of DAG-GNN, a type of GCN [15], and show the benefits of using GAT.**
_Q7 typos_
We will fix the typos accordingly.
[R3] Veličković, et al, Graph Attention Networks, ICLR 2018.
---
Rebuttal 2:
Title: A follow-up message about the rebuttal
Comment: Dear Reviewer GccQ,
We wanted to kindly check in on the status of the rebuttal, as there are 2 days remaining in the rebuttal period. Please let us know if there is anything else we can provide or clarify to address your remaining concerns. We would greatly appreciate it if you could share your insights at your earliest convenience.
Thank you for your time and consideration.
Best regards,
Authors of DeepITE
---
Rebuttal Comment 2.1:
Comment: Thanks for the discussions. Please incorporate the case study results and clarification into the paper.
For the difference from DAG-GNN, the main theoretical results of this paper are from DAG-GNN. While this work applies the method to different problem settings, the technical novelty and contributions would be limited in this sense.
For Recall metric, the authors use different metrics for different datasets without enough justification. I understand what Recall means, but the rebuttal fails to properly justify why only Recall is used for Protein/Synthetic and the other metrics are used for ICASSP. This questions the robustness of the results.
The argument that GAT is inductive is not rigorously correct, since common GNNs (e.g., GCN and SGC) are all applicable for inductive learning. The inductiveness is not a unique advantage of GAT.
---
Rebuttal 3:
Title: Reply to Reviewer GccQ
Comment: We greatly appreciate your valuable suggestions and the time you've taken to provide detailed feedback. We will carefully incorporate the case study results and clarifications into our paper.
> For the difference from DAG-GNN, the main theoretical results of this paper are from DAG-GNN. While this work applies the method to different problem settings, the technical novelty and contributions would be limited in this sense.
We would like to clarify that DAG-GNN **does not prove any of the theorems presented in our paper**, even within their problem settings. Instead, DAG-GNN provides two theorems specifically related to their proposed acyclicity constraints.
> For Recall metric, the authors use different metrics for different datasets without enough justification. I understand what Recall means, but the rebuttal fails to properly justify why only Recall is used for Protein/Synthetic and the other metrics are used for ICASSP. This questions the robustness of the results.
Thank you for raising this concern. The additional metrics used for the ICASSP dataset were selected due to the unique characteristics and requirements of this particular dataset. As a result, these additional metrics **could not be applied to the Protein/Synthetic datasets** in our study.
As explained in our rebuttal, we prioritize Recall@k because, in practice, false positives can be eliminated through further analysis, while **false negatives are irrecoverable** as they get lost among the numerous true negatives. This approach ensures that critical detections are not overlooked, which is why Recall@k is emphasized for certain datasets in our study.
On the other hand, it’s important to note that Recall@k **inherently includes a ranking among all candidate intervention targets** (ITs). A higher Recall@1 indicates that the true IT is ranked first, which implies that, with an appropriate threshold, the precision will also be very high. This is another reason why we focused on Recall@k in our analysis.
> The argument that GAT is inductive is not rigorously correct, since common GNNs (e.g., GCN and SGC) are all applicable for inductive learning. The inductiveness is not a unique advantage of GAT.
Thank you for pointing this out. We agree that GCN and SGC are also inductive, as they, along with GAT, fall under the category of spatial GNNs rather than spectral GNNs, making them all applicable for inductive learning.
As mentioned in our rebuttal, we **choose GAT** not because only GAT is inductive, but **because it is more flexible compared to GCN and SGC** [R3]. This flexibility stems from GAT's ability to dynamically weigh the importance of different nodes, thus allowing the variational distribution given by the inference network to better approximate the exact posterior distribution. This point is also validated in our experiments in Appendix A.6.
Once again, we sincerely thank you for your thoughtful feedback. | Summary: Given a causal graph, this paper describes an autoencoder-based approach specifically designed for identifying intervention targets. This algorithm aims to be data- and computation-efficient by bypassing the task of having to recover the causal graph, and also by incorporating specific architectural designs for the auto-encoder.
Strengths: The main strengths of this paper are as follows:
1. Empirical results: the results of this paper seem to be very convincing, and there is comparison to many related baselines.
2. The method is concise and easy to implement/understand and relate to prior work.
Weaknesses: The weaknesses of this paper are as follows:
1. The code is not available -- the main contributions of this work are experiments and empirical evaluation, which are quite strong. In order to truly assess this work, code availability, while not the only factor, is very important.
2. Lack of clarity: a) There is lack of clarity in terms of the datasets and evaluations (see questions below); b) there is lack of clarity on the competing methods and how this method fits within the general framework. For example, it would be great if RCA, and the main ITE baselines can be formally described in mathematical terms through a simple example (either in the main text or the appendix).
3. Evaluation: the authors seem to not discuss a very trivial and intuitive baseline regarding prediction methods with sparsity such as Lasso (please see questions below).
Technical Quality: 3
Clarity: 2
Questions for Authors: I have a few questions for the authors:
1. What is the purpose of the noise variable \zeta, and why is it modeled as a separate variable from the \mathbf{u} variable? From my experience, the exogenous variables are simply independent noise terms that are provided as input to the functional causal model.
2. How do the authors know the true intervention targets of the protein dataset?
3. Have the authors considered the following baseline: given M interventional datasets and an observational dataset, label each point with label G corresponding to which interventional dataset it comes from. Then, train a sparse classification (feature selection) method such as sparse logistic regression to predict G from the features X when combining an interventional and an observational dataset. Then, the selected features will be the Markov Blanket of G. I was wondering what the author's thoughts are on this simple baseline.
4. In line 300, the authors mention: "Once trained, the inference model of DeepITE becomes equipped to evaluate individual new samples against different causal graphs, directly deducing the intervention targets and thus circumventing the necessity of retraining for each new scenario...."; could the authors elaborate on this point? My understanding is that the model assumes a given causal graph.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: From what I see, there is no discussion on this model's limitation in the text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Q1 code availability_
We are committed to open-sourcing the code upon acceptance of the paper.
_Q2 lack of clarity_
1. Datasets and Evaluations: We have provided descriptions of the datasets used, including their sources, preprocessing steps, and evaluation metrics in Appendix G (please refer to Appendix G.2-G.5).
2. Competing Methods: In Appendix B (pages 13–14), we have provided a review of the existing methods in XAI and RCA. **This section includes a concise explanation of how each method operates**. While we acknowledge that mathematical formulations can be rigorous, we believe that explaining these methods in plain language enhances clarity and comprehension. Additionally, **we have referred readers to Appendix B within the experiment section (see Line 316 on Page 7) for easy access.**
_Q3 observation noise_
DeepITE includes the observation noise variable $\epsilon$ that reflects uncertainty not present in the true SCM. In the true SCM, observed variables are deterministic functions of their exogenous variables and parent nodes via the structural equations (SEs). Since DeepITE does not have access to the true SEs or the distribution of the exogenous variables, it assumes the SEs can be approximated as $Dec(\mathbf{u}, \gamma, A) = f_2((I - A_\mathcal{I}^T)^{-1}f_1(\mathbf{u}))$ (Eq. (7)), with learnable $f_1$ and $f_2$. Thus, $\epsilon$ represents the uncertainty in the estimated observational distribution resulting from this approximation. Note that $\epsilon$ has also been used in VACA [21] for the same purpose. **We have clarified this point on Lines 222-224 in Section 5.1.**
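For intuition, a minimal linear special case of the decoder in Eq. (7), taking $f_1$ and $f_2$ as the identity, can be computed by ancestral traversal; the sketch below (illustrative only, not our implementation) also shows the effect of graph surgery under a hard intervention:

```python
def ancestral_sample(A, u, intervened=()):
    """Linear-SEM version of the decoder x = (I - A_I^T)^{-1} u,
    computed by ancestral traversal. A[i][j] is the weight of
    edge i -> j; nodes are assumed topologically ordered.
    Graph surgery removes the incoming edges of intervened
    nodes, so their value is set purely by u."""
    n = len(u)
    x = [0.0] * n
    for j in range(n):
        if j in intervened:
            x[j] = u[j]  # hard intervention: parents cut
        else:
            x[j] = u[j] + sum(A[i][j] * x[i] for i in range(n))
    return x

# chain 0 -> 1 -> 2 with unit weights, exogenous input u = [1, 0, 0]
A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
print(ancestral_sample(A, [1.0, 0.0, 0.0]))                 # [1.0, 1.0, 1.0]
print(ancestral_sample(A, [1.0, 0.0, 0.0], intervened={1})) # [1.0, 0.0, 0.0]
```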
_Q4 protein dataset_
We clarify that the true intervention targets of the protein dataset are known because we utilize the dataset from IGSP [R1], which provides this information in Appendix E of their paper. Specifically, this dataset contains measurements of proteins and phospholipids under different interventional environments. In each environment, signaling nodes are inhibited or activated. Hence, **these sites form intervention targets.** This dataset has been previously employed in [3,6,10] for ITE. We will explicitly mention this in the dataset introduction in Appendix G.4.
_Q5 Lasso_
We respectfully believe that Lasso does not align with the scope of ITE as outlined in Definition 1 on Page 4. First, **Lasso cannot incorporate the causal graph structure available in ITE**. The prediction target G can be an arbitrary variable in the graph and there is no clear guideline on how to choose G given the observed data. Moreover, **Lasso is designed to identify correlations rather than causal relationships, and so it is typically used for undirected graphical models [R2]**. Consequently, Lasso may select all variables correlated with G, rather than correctly identifying the intervention targets that directly impact G.
Actually, Lasso has been used for RCA in [8], but only for **bipartite** directed graphs where causal and effect relationships between features and predictions have been provided. In contrast, our approach is designed to handle more complex causal graph structures where the predictions are unknown.
_Q6 given causal graph_
You are correct that DeepITE assumes a given causal graph (see Definition 1 on Page 4). However, once trained, it can evaluate new samples on new causal graphs without needing to retrain. By directly inputting the sample and the new graph structure into the inference network, we can derive the intervention targets. This stands in contrast to classical ITE methods, which require retraining for each new causal graph and each new set of intervention targets, even if the graph remains unchanged.
_Q7 limitations_
We have included a discussion of the limitations in Appendix H (Page 21).
[R1] Wang et al, Permutation-based Causal Inference Algorithms with Interventions, NIPS 2017.
[R2] Meinshausen et al, High-dimensional Graphs and Variable Selection with the Lasso, Ann. Statist., 2006.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for detailed responses to my questions, and for addressing some of my concerns.
Regarding Q5:
- I think we may have a misunderstanding. If G is an auxiliary discrete variable (with discrete values denoting the interventional environment a data point came from), which has the same value for each point in the same interventional environment, a method like Lasso would indeed find all variables that are associated with G (it will yield the variables in the Markov Blanket of G, we can term them MB(G)). One can also assume that G is always a cause (by definition it cannot be an effect of any variable). Then, MB(G) will be either the direct effects of G, or parents of the direct effects. Then, one can think about using the known causal graph to distinguish these sets of variables.
Regarding Q6:
- Can the authors briefly elaborate why it is possible to generalize to new, previously-unseen, causal graphs when using their framework?
Thank you very much.
---
Rebuttal 2:
Title: A follow-up message about the rebuttal
Comment: Dear Reviewer q8ww,
We wanted to kindly check in on the status of the rebuttal, as there are 2 days remaining in the rebuttal period. Please let us know if there is anything else we can provide or clarify to address your remaining concerns. We would greatly appreciate it if you could share your insights at your earliest convenience.
Thank you for your time and consideration.
Best regards,
Authors of DeepITE
---
Rebuttal 3:
Title: Reply to Reviewer q8ww
Comment: Thank you very much for your valuable feedback.
>Regarding Q5:
>- I think we may have a misunderstanding. If G is an auxiliary discrete variable (with discrete values denoting the interventional environment a data point came from), which has the same value for each point in the same interventional environment, a method like Lasso would indeed find all variables that are associated with G (it will yield the variables in the Markov Blanket of G, we can term them MB(G)). One can also assume that G is always a cause (by definition it cannot be an effect of any variable). Then, MB(G) will be either the direct effects of G, or parents of the direct effects. Then, one can think about using the known causal graph to distinguish these sets of variables.
Thank you very much for your clarification. We now understand that this is quite similar to the method implemented in [3], and we have compared DeepITE with it in our analysis.
>Regarding Q6:
>- Can the authors briefly elaborate why it is possible to generalize to new, previously-unseen, causal graphs when using their framework?
Sure. In simple words, after training the VGAE collaboratively using graphs with different sets of intervention targets, different structures, and even different sizes, the inference model (i.e., the encoder) in the VGAE successfully learns a mapping whose input is the observed data $\mathbf{x}$ and the adjacency matrix $A$ and output is the distribution of intervention indicator $\gamma$. As a result, when we replace the adjacency matrix $A$ by that of an unseen graph, the encoder can still output the distribution of the intervention indicator.
Thank you again for your thoughtful responses, as they have provided valuable insights for improving our paper. | Rebuttal 1:
Rebuttal: **Global response to all reviewers**:
We sincerely thank all the reviewers for their valuable suggestions. We are delighted by the unanimous recognition of our work and appreciate the reviewers' positive feedback on the carefully designed network architecture and extensive experiments in DeepITE.
We have thoroughly reviewed each of the reviewers' questions and suggestions, and we are grateful for their patience and diligence. In response, we have **conducted additional experiments** to evaluate the performance of DeepITE as a function of (a) graph sizes, (b) the number of interventions, (c) the mixture proportion of soft and hard interventions, (d) sample size for each graph, and (e) the number of mixed graphs, added new analyses to address any lingering questions, and emphasized the significance of our work. We have also **included a summary table** that highlights the characteristics of all compared methods and **a detailed case study** to further illustrate the practical application and effectiveness of our approach.
In the following rebuttal, we address each reviewer's comments individually. The reviewer's comments are presented in italics, followed by our response. Quotations from the revised paper are included in markdown quotation mode. Unless otherwise specified, all references to pages, equations, sections, and bibliographic citations relate to the revised paper. Additionally, figures, tables, and citations prefixed with "R" (e.g., [R1]) are newly added in this rebuttal. All newly added figures and tables are enclosed within a separate single-page PDF attached to this global response. We will incorporate the suggested revisions into the final camera-ready version to enhance the clarity and persuasiveness of our paper.
Once again, we would like to express our gratitude to the reviewers for their insightful feedback, which has helped us identify areas for improvement and refine our work. We welcome any further insights or concerns that would contribute to enhancing the paper according to the reviewers' perspectives.
Pdf: /pdf/e10e52ecd1869d887ef4535a8cdf7bee352b75ba.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Convergence of Loss and Uncertainty-based Active Learning Algorithms | Accept (poster) | Summary: This is a technical paper, whose subject of interest is the convergence of stochastic gradient-based learning algorithms which include a stochastic step size mechanism, whose value is allowed to be influenced by losses or other "uncertainty" related quantities that are computed at training time.
Their main theoretical results can be roughly broken into two categories based on the assumptions placed on the step-size mechanism. The first category is where the step size is a re-scaled Bernoulli random variable, taking values in $\\{0, \\gamma\\}$ in their notation, with $\\gamma$ fixed throughout but the probability of a non-zero step size (i.e., $z\_{t} = \\gamma$) can change depending on the data/parameter at each step in the training routine. They start with an argument centered around a monotonic loss function and linear binary classifiers, but also consider an "equivalent loss" type of strategy (like in Liu and Li '23), again where a convenient monotonicity assumption (here on $\\pi$) preserves convexity and aids in analysis. Their main bounds are in-expectation upper bounds on the loss incurred by the average of iterates generated using this Bernoulli step size.
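To fix intuition, the Bernoulli step-size mechanism can be sketched as follows (my own toy example, not the paper's setup; the least-squares objective and the capped loss-based acceptance probability are illustrative choices):

```python
import random

def bernoulli_sgd(data, grad, pi, gamma=0.1, steps=2000, seed=0):
    """SGD with a rescaled-Bernoulli step size z_t in {0, gamma}:
    the sampled point triggers an update with probability
    pi(x, y, theta); otherwise the step size is zero."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(steps):
        x, y = rng.choice(data)
        if rng.random() < pi(x, y, theta):
            theta -= gamma * grad(x, y, theta)
    return theta

# toy least squares y ~ theta * x (true theta = 2), with a
# loss-based acceptance probability capped at 1
data = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
grad = lambda x, y, th: 2 * (th * x - y) * x
pi = lambda x, y, th: min(1.0, 0.1 + (th * x - y) ** 2)
theta = bernoulli_sgd(data, grad, pi)
print(theta)  # converges near 2.0
```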
The second category is similar, but allows the actual step size to be impacted by loss/gradient values in an "adaptive" way, while retaining a certain probability of step size 0. This combination of Bernoulli step sizes with an adaptive step size is what they call "Adaptive-Weight Sampling (AWS)", and they provide conditions to obtain upper bounds on the (empirical) objective function of interest (i.e., the average loss).
Their theoretical results are complemented by empirical analysis, in which they compare uniform random sampling (of points for SGD), "traditional" loss-based sampling, and their AWS approach (their Fig 1). This setup assumes loss access, i.e., this is not active learning. On the other hand, for active learning scenarios, a loss estimator needs to be plugged in; they consider the impact of the quality of such an estimator in their second batch of tests (their Fig 2).
Strengths: Overall, the paper is quite well-written and has a natural logical structure which is easy to follow. The authors have obtained a flexible set of conditions for evaluating SGD procedures with a stochastic loss-dependent step size mechanism, which appear to build quite directly upon the existing literature, which they are good about citing (e.g., Raj and Bach '22, Liu and Li '23).
The paper is a mixture of different efforts, some new convergence theory, a new proposed algorithm (AWS), plus formal/empirical analysis of this algorithm, and I think there is potential for this work to have an audience at a conference like NeurIPS.
Weaknesses: I am not familiar with the related technical literature, so I will not comment on the novelty or theoretical prowess required to obtain the results here.
I would personally highlight two main points I feel need improvement. The first point is that the narrative of this paper feels really bloated. To the best of my reading, all the talk of "active learning" in the title and throughout the paper is totally irrelevant to the entire paper, save for the last paragraph of section 4 plus Figure 2. Yes, there are obvious links between the procedure of interest here and active learning settings, but the core problem setting stands on its own just fine. There is no reason to structure the paper around active learning, it just makes things confusing and downplays the substantive results. I feel like I can say the exact same thing about "uncertainty-based" methods. The only uncertainty-related formulation I can find is Corollary 3.8. Having this is great, but why put uncertainty-based and loss-based methods on the same footing when writing the paper?
The second point is related to technical exposition. For the most part the work seems to be well done, but for a first-time reader, certain parts feel rushed and sloppy. I'll make a list of points I tripped up on in the following section.
Technical Quality: 2
Clarity: 2
Questions for Authors: Here are some points that caught my eye while reading the paper. Some are obvious typos, others are poor technical exposition.
- What is the difference with Loizou et al. (2021)? In line 64, the authors say their work is different *"as we consider convergence of SGD under sampling of points."* How is this different? It is unclear.
- Line 140: typo of $\\mathcal{Y} = \\{-,1,1\\}$, should be $\\mathcal{Y} = \\{-1,1\\}$.
- In the key bound of Theorem 3.1 (for example), what is expectation being taken with respect to? On the left-most side of the main inequality, $x$ and $y$ appear. One step to the right, and only $x\_{t}$ and $y\_{t}$ appear. Since there is no generalization analysis, I assumed expectation was with respect to $(x\_{1},y\_{1}),\\ldots,(x\_{n},y\_{n})$ and the randomness in the stochastic algorithm. Is this correct, or are $x$ and $y$ supposed to indicate test data?
- I found the statement on page 5 that *"algorithm (1) is an SGD algorithm with respect to an objective function $\\tilde{\\ell}$ with gradient..."* a bit troubling. Given a random sample of $(x,y)$, indeed $\\pi(x,y,\\theta)\\nabla\_{\\theta}\\ell(x,y,\\theta)$ is an unbiased estimator of $\\nabla\_{\\theta}\\tilde{\ell}(\\theta)$ as defined in (3), but in the case of constant-weight sampling, $z\_{t}$ is *not* equal to $\\pi(x\_{t},y\_{t},\\theta\_{t})$, but rather $\mathbb{E}[z\_{t}] = \\gamma\\pi(x\_{t},y\_{t},\\theta\_{t})$, correct? Perhaps the authors are just glossing over the $\\gamma$, but I think if the authors want to say that $\mathbb{E}[z\_{t}\\nabla\_{\\theta}\ell(x\_{t},y\_{t},\\theta\_{t})]$ equals the right-hand side of (3), it should be done a bit more precisely.
- What is the main difference between the second half of section 3.1 and the work of Liu and Li (2023)? Are they considering the same problem and just looking at one special case of $\\Pi$ and $\\pi$? Is the problem setting different? This is all unclear to me.
- Lemma 3.2: font for expectation is different from the other results ($\\mathbf{E}$ versus $\\mathbb{E}$).
Overall, there is a decent effort here, but I think the paper still needs a fair bit of polish.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a well-rounded summary of our results and for recognizing their potential interest to a NeurIPS-like community.
# Weaknesses
The review focuses on some presentation issues. First, it argues that focusing the paper's exposition around active learning may be confusing and downplays our substantive results (on sampling-based learning). Additionally, it notes that it is unclear why we cover both loss-based and uncertainty-based methods, with the latter discussed only in some parts of the paper. Second, the reviewer noted some technical notation points that require clarification.
The class of algorithms we study (projected SGD with stochastic step size), defined in Section 2, accommodates active learning and data subset selection algorithms by appropriate definition of the stochastic step size.
The framework of projected SGD with stochastic step size accommodates different data sampling strategies, including those based on loss and uncertainty criteria. Our theoretical results in Section 3 apply to active learning algorithms under the assumption that the algorithm decides whether or not to query the label of a data point based on knowing the exact loss value. Our experimental results evaluate active learning algorithms that make this decision based on an estimated loss value, and these showed good conformance to algorithms using exact loss values. Our theoretical results in Section 3 also apply to data selection algorithms that can observe the label of each data point and use it to compute the exact loss value. In our revision of Section 2, we will clarify the connection between active learning algorithms and projected SGD with stochastic step size; see the concrete revisions we plan to make to Section 2 in our response to Reviewer ie4P.
As for why we cover both loss-based and uncertainty-based methods, we note that our theoretical framework and results allow us to derive convergence rate bounds for both. Uncertainty-based methods are discussed at several points in our paper and the associated appendix. Following Theorem 3.1, we comment that the proof technique allows us to establish the convergence rate for the margin uncertainty-based method considered in Raj and Bach [2022]. In Corollary 3.8, we identify an uncertainty-based sampling probability for which the convergence rate bound in Theorem 3.6 holds. Furthermore, in Appendix A.15, we show how this can be extended to a multi-class classification setting.
# Question 1
The difference is that, unlike Loizou et al. [2021], we use *sampling* of points. In contrast, the algorithm by Loizou et al. [2021] conducts an SGD update with adaptive step size for every input point. This difference is precisely what makes our work relevant to active learning, which centers on efficient selection/sampling of points. __The comment titled "Proposed clarification for question 1"__ contains the proposed textual clarification for the introduction section. The changes to the problem statement that we propose in response to Reviewer ie4P clarify this further.
# Questions 2 & 6
These are just typos -- we'll fix them.
# Question 3
In our paper, we consider the streaming computation setting where $(x_1,y_1), \ldots, (x_n,y_n)$ is a sequence of independent and identically distributed labeled data points with distribution $\mathcal{D}$. We will clarify this in the problem statement section (see the revised text in our response to Reviewer ie4P). In Theorem 3.1, $(x,y)$ is an independent sample of a labeled data point from $\mathcal{D}$. We will clarify this by revising the statement of the theorem as __shown in the comment titled "Proposed clarification for Question 3"__.
Further details: in Theorem 3.1, we have
$\mathbb{E}\left[\ell(yx^\top \bar{\theta}\_n)\right] \leq \mathbb{E}\left[\frac{1}{n}\sum\_{t=1}^n \ell(y\_t x\_t^\top \theta\_t)\right].$
Since we consider a streaming algorithm and $(x_1,y_1), \ldots, (x_n, y_n)$ is a sequence of independent labeled data points, for every $t\in \{1,\ldots, n\}$, $(x_t,y_t)$ and $\theta_t$ are independent random variables as $\theta_t$ depends only on $(x_1,y_1),\ldots, (x_{t-1},y_{t-1})$. Hence, it follows that
\begin{eqnarray*}
\mathbb{E}\left[\frac{1}{n}\sum_{t=1}^n \ell(y_t x_t^\top \theta_t)\right] &=& \frac{1}{n}\sum_{t=1}^n\mathbb{E}\left[\ell(y_t x_t^\top \theta_t)\right]\\
&=& \frac{1}{n}\sum_{t=1}^n \mathbb{E}\left[\ell(y x^\top \theta_t)\right]\\
&=& \mathbb{E}\left[\frac{1}{n}\sum_{t=1}^n \ell(y x^\top \theta_t)\right]\\
&\geq & \mathbb{E}\left[\ell\left(yx^\top \frac{1}{n}\sum_{t=1}^n \theta_t\right)\right]\\
&=& \mathbb{E}\left[\ell(yx^\top \bar{\theta}_n)\right]
\end{eqnarray*}
where the last inequality follows from Jensen's inequality, as $\ell$ is assumed to be a convex function.
# Question 4
In the context of Equation (3), as noted in the text, $z_t$ is defined by $z_t = \gamma \zeta_t$, where $\zeta_t$ is a Bernoulli random variable with mean $\pi(x_t,y_t,\theta_t)$; this definition already appears in the text.
# Question 5
The problem setting is the same. The main difference is that we consider sampling according to a general sampling probability function $\pi$, while Liu and Li (2023) considered only some special cases of $\pi$, such as sampling proportional to the conditional loss value. We discussed this in the related work (Section 1.2).
---
Rebuttal 2:
Title: Proposed clarification for question 1
Comment: We propose to clarify this as follows in the introduction, where the added text is highlighted in bold:
> There is a large body of work on convergence of SGD algorithms, e.g. see Bubeck [2015] and Nesterov [2018]. These results are established for SGD algorithms under either constant, diminishing or adaptive step sizes. Recently, Loizou et al. [2021], studied SGD with the stochastic Polyak's step size, depending on the ratio of the loss and the squared gradient of the loss of a point. __Our work proposes an adaptive-weight sampling algorithm and provides its convergence analysis; the algorithm is defined as SGD with sampling of points and an adaptive step size update that conforms to the stochastic Polyak's step size in expectation. This is unlike the adaptive step size SGD algorithm by Loizou et al. [2021], which does not use sampling.__
---
Rebuttal 3:
Title: Proposed clarification for Question 3
Comment: Proposed change to Theorem 3.1, changes highlighted in bold.
> Assume that $\rho^* > 1$, the loss function is the squared hinge loss function, and the sampling probability function $\pi$ is such that for all $u \leq 1$, $\pi(u) \leq \beta/2$ and
>
> $\pi(u) \geq \pi^*(\ell(u)) := \frac{\beta}{2}\left(1-\frac{1}{1+\mu\sqrt{\ell(u)}}\right)$
>
> for some constants $0 < \beta \leq 2$ and $\mu \geq \sqrt{2}/(\rho^*-1)$. Then, for any initial value $\theta_1$ such that $||\theta_1-\theta^*||\leq S$ and $\\{\theta_t\\}_{t>1}$ generated according to algorithm (1) with $\gamma = 1/R^2$,
>
> $\mathbb{E}\left[\ell(yx^\top \bar{\theta}\_n)\right] \leq \mathbb{E}\left[\frac{1}{n}\sum\_{t=1}^n \ell(y\_t x\_t^\top \theta\_t)\right]
\leq \frac{R^2 S^2}{\beta}\frac{1}{n},$
>
> __where $(x,y)$ is an independent sample of a labeled data point from $\mathcal{D}$.__
>
> Moreover, if the sampling is according to $\pi^*$, then the expected number of sampled points satisfies:
>
> $\mathbb{E}\left[\sum\_{t=1}^n \pi^*(\ell(y\_t x\_t^\top \theta\_t))\right] \leq \min\\{\frac{1}{2}R S \mu\sqrt{\beta} \sqrt{n}, \frac{1}{2}\beta n\\}.$
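For concreteness, the sampling probability $\pi^*$ in the theorem above can be evaluated numerically. The sketch below is our own illustration (the default $\beta$ and $\mu$ values are arbitrary choices, not taken from the paper):

```python
import math

def pi_star(loss, beta=1.0, mu=2.0):
    """Loss-based sampling probability from Theorem 3.1:
    pi*(l) = (beta / 2) * (1 - 1 / (1 + mu * sqrt(l)))."""
    assert 0.0 < beta <= 2.0 and mu > 0.0 and loss >= 0.0
    return (beta / 2.0) * (1.0 - 1.0 / (1.0 + mu * math.sqrt(loss)))

# pi* is increasing in the loss and bounded above by beta / 2, so points
# with larger loss are queried more often, and never with probability > beta/2.
probs = [pi_star(l) for l in (0.0, 0.25, 1.0, 100.0)]
```

Because $\pi^*$ vanishes as the loss goes to zero, the expected number of queried labels shrinks as training progresses, which is the mechanism behind the $O(\sqrt{n})$ sampling complexity bound above.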
---
Rebuttal Comment 3.1:
Title: Re: Rebuttal by Authors
Comment: I thank the authors for their response. I think that with proper revisions, the paper will be more easily parsed and more accessible to a wider audience. I will raise my score. | Summary: The paper considers active learning algorithms based on uncertainty and loss functions. The learner queries the label of an unlabeled sample with probability proportional to (some function of) the uncertainty/loss and updates the parameter according to some step size scheme. The authors generalize previous results under the strictly separable binary classification setting and the general classification setting with convex loss and smooth equivalent loss. The authors later propose a Polyak-type step size scheme called Adaptive-Weight Sampling and prove its convergence. Numerical experiments verify the efficiency of AWS under both oracle and estimated loss functions.
Strengths: 1. The analysis is solid;
2. The presentation is clear;
3. The Adaptive-Weight Sampling (AWS) algorithm provides a novel perspective for active learning literature.
Weaknesses: 1. The generalization of previous results seems not to be very essential;
2. The assumption of access to the exact loss function before querying seems too strong for theoretical analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I wonder if it is possible to relax the requirement of knowing the exact loss functions before querying the label of the sample point.
2. What's the comparison between the theoretical sample complexity of active learning (uncertainty/loss-based sampling) and that of passive learning (uniform sampling) in the considered problem setting?
3. The authors conduct experiments on "uniform sampling" + "constant step size", "loss-based sampling" + "constant step size", and "loss-based sampling" + "Polyak step size" to verify the effectiveness of the approach of loss-based sampling. For completeness, it is necessary to present the performance of using "uniform sampling" + "Polyak step size" in the numerical experiments.
(raised my rating from 5 to 6 after the rebuttal)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing insightful and useful comments.
# Weakness 1
Regarding the comment that we generalize previous results under the strictly separable binary classification setting and the general classification setting with convex loss and smooth equivalent loss, we would like to clarify the following. For the linearly separable binary classification setting, we provide new results on the convergence rates for loss-based sampling strategies; previous work focused on margin-of-confidence uncertainty-based sampling (Raj and Bach [2022]), which is different. Our results on the convergence rates of constant-weight sampling for "general classification with convex loss" and "smooth equivalent loss" generalize the work by Liu and Li [2023]. Importantly, our convergence rate bounds allow for different sampling probability functions, going beyond specific choices such as sampling proportional to a loss value.
# Weakness 2 / Question 1
For the theoretical convergence rate analysis, it may be possible to relax the requirement of knowing the exact loss function value before querying the label of the sample point. This would involve accounting for the estimation noise of the loss value in the convergence rate analysis. This is an interesting avenue for future research, as indicated in the conclusion section. Note that in our experiments (Figure 2), we evaluated algorithms that sample points using a loss value estimator, thus alleviating the need to know the exact loss value before querying the label of the sample point.
# Question 2
For the question on the comparison of sampling complexities of active learning algorithms and those using uniform sampling of data points, we regard this as an interesting question for theoretical study. Such a study should consider both convergence rate and sampling complexity of algorithms. Please __see the comment below__ for a detailed discussion for the class of projected SGD algorithms using a loss-based sampling.
# Question 3
We thank the reviewer for this suggestion. We conducted additional experiments to include the case of "uniform sampling" with "Polyak step size." We will include this case in Figure 1 of our revised paper. The new experimental results are provided in the attached PDF file.
From the results, we observe that applying stochastic Polyak's step size with uniform random sampling leads to faster convergence compared to uniform sampling with regular SGD updates. Moreover, sampling according to absolute error loss with stochastic Polyak's step size substantially improves upon uniform sampling with stochastic Polyak's step size.
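As a hedged illustration of the step size scheme discussed here, the following sketch applies stochastic Polyak's step size $z_t = \ell_t / ||\nabla \ell_t||^2$ (with the optimal loss taken as zero) to the squared hinge loss for a linear model; the concrete code is our own example, not the paper's implementation:

```python
import numpy as np

def polyak_step(theta, x, y, eps=1e-12):
    """One SGD update with stochastic Polyak's step size (as in Loizou et al.
    [2021], with l* = 0) for the squared hinge loss l = max(0, 1 - y*x^T theta)^2."""
    margin = 1.0 - y * float(x @ theta)
    if margin <= 0.0:            # zero loss: no update needed
        return theta
    loss = margin ** 2
    grad = -2.0 * margin * y * x
    step = loss / (np.dot(grad, grad) + eps)   # Polyak step size
    return theta - step * grad
```

For this loss the Polyak step works out to $1/(4||x||^2)$ on positive-margin points, so each update halves the margin deficit regardless of the scale of $x$.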
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. The rebuttal addressed some of my concerns, e.g. the response to my Q3.
For Weakness 1, I still keep my opinion that the technical novelty is not very essential. I appreciate the part that generalizes Lemma 1 (28) in Raj and Bach's work (https://arxiv.org/pdf/2110.15784) to Assumption A.1, yet the remaining analysis is still the same. Also, for the equivalent loss part, Liu and Li's work has theoretically addressed the necessity of using "a function" ($\Pi$ in your work) of the loss (Section 4.1 in https://arxiv.org/pdf/2307.02719), which makes some of this work's results (e.g. Thm 3.3) predictable.
For Weakness 2 and Question 1, maybe your choice of loss-based functions can be justified by comparing yours with more literature (e.g. Towards a statistical theory of data selection under weak supervision, ICLR 2024). I also recommend the authors include more discussions on the cases when the exact loss function is unknown; I'm quite curious if the chosen method (Random Forest) is the best way to estimate the loss.
One more question for your response to my Question 2: Does the loss-based sampling have a theoretically better sampling complexity/convergence rate compared to uniform sampling for the general classification setting?
Currently, I'm not against accepting this paper (as can be seen from my positive rating); I just want to discuss these questions with the authors.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer UMCS's comment on Weakness 1
Comment: Regarding Weakness 1, in our response, we aimed to emphasize that we provide new results identifying conditions and characterizing the convergence rate and sampling complexity for the linearly separable case. This includes Theorem 3.1 for the squared hinge loss function, as well as several results presented in the appendix, such as Theorem A.5 for sampling proportional to zero-one loss or absolute error loss, and Theorems A.6 and A.7 for a generalized smooth hinge loss function. To the best of our knowledge, these results are novel and provide insights into the performance of loss-based sampling policies.
As noted by the reviewer, a key technical aspect in the proofs is the convergence rate Lemmas A.2 and A.3, which generalize Lemma 1 in Raj and Bach's work, which is restricted to specific choices of loss functions and uncertainty-based sampling. This generalization was instrumental in establishing our results and may prove useful in future work. We have discussed how our generalization relates to Lemma 1 by Raj and Bach, as seen in lines 196-203. Additionally, note that the proofs of Theorems 3.1, A.5, A.6, and A.7 also require additional technical steps.
---
Reply to Comment 1.1.2:
Title: Response to reviewer UMCS's "One more question for your response to my Question 2"
Comment: Regarding the question of whether loss-based sampling theoretically has a better sampling complexity or convergence rate than uniform sampling in the general classification setting, this is an interesting question that warrants further study. Our experimental results demonstrate instances where sampling based on absolute error loss shows better sampling complexity and convergence rate than uniform sampling.
The convergence rate result in Theorem 3.3 provides an upper bound on the expected cumulative training loss for loss-based sampling according to a sampling function $\pi$. This bound accommodates uniform sampling as a special case, and in that case, it conforms to the bound that can be obtained through the convergence analysis of projected SGD, as outlined in our response.
One may compare the convergence rate and sampling complexity bounds for sampling according to a sampling function $\pi$ and uniform sampling, as discussed below.
From Theorem 3.3, we have:
$$
\mathbb{E}\left[\frac{1}{n}\sum_{t=1}^n \ell(\theta_t)\right]
\leq \bar{\ell}
$$
where:
$$
\bar{\ell} := \ell^*_\pi + \Pi^{-1}\left(\frac{\sqrt{2}S\sigma_{\pi}}{\sqrt{n}}\right) + \Pi^{-1}\left(\frac{LS^2}{n}\right)
$$
and
$$
\ell^*_\pi := \inf_{\theta\in \Theta}\Pi^{-1}(\mathbb{E}_x[\Pi(\mathbb{E}_y[\ell(x,y,\theta)\mid x])]).
$$
For uniform sampling with probability $p\in (0,1]$, we have $\pi(x) = p$, $\Pi(x) = px$ and $\Pi^{-1}(x) = x/p$. Let $\ell^*:=\inf_{\theta \in \Theta} \ell(\theta)$. By the convexity of $\Pi$ (as $\pi$ is an increasing function), note that $\ell^*_\pi\geq \ell^*$ for every $\pi$.
From Theorem 3.3, for uniform sampling with probability $p \in (0,1]$, it holds that:
$$
\mathbb{E}\left[\frac{1}{n}\sum_{t=1}^n \ell(\theta_t)\right]
\leq \ell^* + \frac{\sqrt{2}S\sigma}{\sqrt{pn}} + \frac{LS^2}{pn}
$$
where $\sigma$ is such that
$$
\mathbb{E}\_{x,y}[||\nabla\_\theta \ell(x,y,\theta)||^2] - p ||\mathbb{E}\_{x,y}[\nabla_\theta \ell(x,y,\theta)]||^2\leq \sigma^2 \hbox{ for every } \theta \in \Theta.
$$
For the discussion of sampling complexity, consider the case where $\pi$ is a concave function. By Lemma 3.5, when sampling according to $\pi$, the expected number of samples is upper bounded by $\pi(\bar{\ell})n$. Obviously, for uniform sampling with probability $p$, the expected number of samples is $pn$. Therefore, the sampling complexity for sampling according to the sampling function $\pi$ is lower than or equal to that for uniform sampling with probability $p$ if the following condition holds:
$$
p \geq \pi(\bar{\ell}).
$$
The convergence rate upper bound under sampling according to $\pi$ is smaller than or equal to that under uniform sampling if the following condition holds:
$$
\bar{\ell}\leq \ell^* + \frac{\sqrt{2}S\sigma}{\sqrt{pn}} + \frac{LS^2}{pn}.
$$
By straightforward calculus, it can be shown that this condition is equivalent to
$$
\sqrt{pn}\leq S\frac{\sigma + \sqrt{\sigma^2 + 2L(\bar{\ell}-\ell^*)}}{\sqrt{2}(\bar{\ell}-\ell^*)}.
$$
Combining with $p \geq \pi(\bar{\ell})$, it is necessary that
$$
\sqrt{\pi(\bar{\ell})n}\leq S\frac{\sigma + \sqrt{\sigma^2 + 2L(\bar{\ell}-\ell^*)}}{\sqrt{2}(\bar{\ell}-\ell^*)}
$$
which is also sufficient when $p = \pi(\bar{\ell})$. The condition can be further analyzed for different cases including $\ell_\pi^* > \ell^*$ and $\ell_\pi^* = 0$ (and hence $\ell^*=0$).
In our revision, we will include further discussion on the dependence of the convergence rate and sampling complexity on the sampling function $\pi$. Additionally, we will highlight as an open research problem the need to study the tightness and comparison of convergence rates and sampling complexities for the general classification setting in the conclusion section.
---
Rebuttal 2:
Title: Detailed response to question 2
Comment: We first discuss convergence rate bounds. For the linearly separable binary classification case, in our paper, we provide conditions under which loss-based sampling policies achieve a convergence rate of $O(1/n)$ for the squared hinge loss function, the same as the perceptron algorithm (Theorem 3.1, and see also Theorems A.5 and A.6 in the appendix). It is well known that standard projected SGD algorithm with a constant step size guarantees a convergence rate of $O(1/\sqrt{n})$ for smooth convex loss functions. This convergence rate can be improved to $O(1/n)$ by using stochastic Polyak's step size, as shown by Loizou et al. [2021]. Our Theorem 3.3 shows that a convergence rate of $O(\Pi^{-1}(1/\sqrt{n}))$ can be achieved by a constant-weight, loss-based sampling with the sampling probability function $\pi$, where $\Pi$ is the primitive function of $\pi$. Our adaptive-weight sampling algorithm guarantees the same convergence rate as the algorithm that samples every point using projected SGD with stochastic Polyak's step size (Loizou et al. [2021]) under conditions given in Theorem 3.6. For our adaptive-weight sampling algorithm, with a constant sampling probability $\pi(x,y,\theta) = p$ for all $x,y,\theta$, the convergence rate bound in Theorem 3.6 holds provided that $p$ satisfies the condition in Equation (6).
We next discuss sampling complexity bounds, i.e., the bounds on the expected number of sampled points by an algorithm. The sampling complexity of an algorithm that samples each point is clearly $n$, where $n$ is the number of SGD updates. This stands in contrast to the sampling complexity of $O(\sqrt{n})$, which is shown to hold for active learning algorithms using certain loss-based policies (Theorem 3.1 and Lemma 3.5). We next consider the case of "uniform sampling," where each data point is sampled with a fixed probability $p$. In this case, the expected number of sampled points is $pn$.
For the linearly separable binary classification case, with uniform sampling with probability $p=\beta/2$, the convergence rate bound in Theorem 3.1 holds and the sampling complexity is $O(\beta n)$. This stands in contrast to the sampling complexity of $O(\sqrt{\beta n})$ under loss-based sampling according to $\pi^*$ defined in Theorem 3.1.
For the general classification setting, we provide the following discussion. For projected SGD with uniform sampling, we can derive convergence rate bounds by using known results for projected SGD with a constant step size. Consider the projected SGD as in Equation (1) of our paper, with stochastic step size $z_t$ equal to the product of a fixed step size $\gamma > 0$ and $\zeta_t$, where $\zeta_t$ is a sequence of independent Bernoulli random variables with mean $p$. Then, we may regard this as an SGD algorithm with a constant step size $\gamma p$ and the stochastic gradient vector $g_t = (\zeta_t/p) \nabla_\theta \ell(x_t,y_t,\theta_t)$. This stochastic gradient vector is clearly an unbiased estimator of $\nabla\_\theta \ell(x_t,y_t,\theta_t)$ and we have $\mathbb{E}[||g_t - \nabla\_\theta \ell(x\_t,y\_t,\theta\_t)||\_2^2\mid x\_t, y\_t, \theta\_t] = (1/p-1)||\nabla\_\theta \ell(x\_t,y\_t,\theta\_t)||^2$. Thus, we have $\mathbb{E}[||g\_t - \nabla\_\theta \ell(x\_t,y\_t,\theta\_t)||\_2^2\mid x\_t,y\_t,\theta\_t]\leq (1/p-1)\sigma^2$, for $\sigma$ such that $||\nabla \ell(x,y,\theta)||\_2^2\leq \sigma^2$, for every $x,y,\theta$.
For smooth and convex loss functions, by a well-known convergence result for projected SGD with a constant step size (covered by Theorem 6.3 in Bubeck [2015]), it can be readily shown that for the step size set to $p/(L+(\sigma/S)\sqrt{1/p-1}\sqrt{n/2})$, the following convergence rate bound holds:
$$
S\sigma \sqrt{\frac{2(1-p)}{pn}} + \frac{L S^2}{n}
$$
where $L$ and $S$ are defined in our paper. Hence, we have the sampling complexity $O(pn)$ and the convergence rate bound $O(1/\sqrt{pn})$. From Theorem 3.3 in our paper, we have the convergence rate bound $O(\Pi^{-1}(\sqrt{2}S\sigma_\pi/\sqrt{n}))$, and with additional assumption that $\pi$ is concave, the sampling complexity $O(\pi(\Pi^{-1}(\sqrt{2}S\sigma_\pi/\sqrt{n}))n)$. For the uniform sampling case, we have $\pi(x) = p$ and $\Pi^{-1}(x) = x/p$. Further noting that $\sigma_\pi \leq \sqrt{p}\sigma$, we have the convergence rate bound of $O(1/\sqrt{np})$ and the sampling complexity of $O(pn)$ holds, both conforming to what we derived above.
In our experimental results (see revised Figure 1 in the attached PDF), we demonstrate that faster convergence can be achieved using loss-based sampling compared to uniform sampling for comparable sampling complexity.
---
Rebuttal 3:
Title: General response to UMCS's comment
Comment: We thank the reviewer for their additional comments, interesting questions for discussion, and useful references. We are glad that the additional experimental results we provided have adequately addressed the concern raised by the reviewer.
We further elaborate on each weakness & question separately in the comments below.
---
Rebuttal 4:
Title: Response to Reviewer UMCS's comment on Weakness 2 and Question 1
Comment: We thank the reviewer for bringing the work by Kolossov, Montanari, and Tandon (2024) to our attention. We will add a discussion of this work in the related work section. Additionally, we will compare the sampling functions studied therein with those considered in our work. The examples of loss-based sampling functions we present are motivated by practical applications, such as the use of absolute error loss in certain industrial contexts.
We will include additional discussion on the case where the exact loss function is unknown. This scenario is addressed in our experimental results, where we used a Random Forest estimator. We believe the concrete choice of Random Forest is not of particular importance to our results: the important finding is that there exist regression models whose numerical results conform closely to those obtained using the exact loss values. As we show through the new experiments outlined below, similar results can be obtained with alternative choices of loss estimator.
It is important to note that proposing a specific loss estimator is beyond the scope of our work. We will provide further discussion on the loss estimator we employed and other estimators we tested.
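To make the estimator's role concrete, here is a minimal sketch of estimating the loss at an unqueried point from previously queried points. The k-nearest-neighbor regressor is only a simple stand-in for the Random Forest (or neural network) estimators actually used in the experiments, and all names are illustrative:

```python
import numpy as np

def estimate_loss(x, queried_x, queried_loss, k=5):
    """Estimate the loss at an unqueried point x as the mean observed loss of
    its k nearest previously queried points. A stand-in for the Random Forest
    regressor used in the experiments; any reasonable regressor can play
    this role."""
    if len(queried_x) == 0:
        return 1.0  # no history yet: assume a high loss so early points get queried
    dist = np.linalg.norm(np.asarray(queried_x) - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(np.mean(np.asarray(queried_loss)[nearest]))
```

The estimated loss then replaces the exact loss inside the sampling probability function, so no label is needed before the query decision.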
**Additional experiments with neural network based loss estimator**
Concretely, we have run additional experiments with a neural network based loss estimator and, as with the RF loss estimator, obtained numerical results that conform closely to the results obtained using the exact loss values.
Per the [NeurIPS 2024 FAQ for authors](https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ#:~:text=the%20author%20rebuttal%3F-,No.,all%20linked%20files%20are%20anonymized), we're told not to send links in any part of the response, and unfortunately we do not have the ability to edit the official rebuttal anymore, only post comments, so we can't upload a PDF with the figure with the new results. However, we have sent a message to the AC with an anonymized link to the figure with results with the neural network based loss estimator. | Summary: This submission studies the convergence guarantees of and bounds on the expected number of samples used when using loss based active learning. They additionally propose a new sampling scheme that combines loss based sampling and a Polyak step size and provide convergence guarantees. Their analysis covers multiple models and loss functions. They proposed methods further evaluated with numerical experiments on multiple datasets.
Strengths: The problem addressed is interesting and has not been addressed in the literature. In order for loss based sampling strategies to be effectively deployed in the wild this sort of analysis is necessary. The algorithmic contributions are also of interest, and borrow a well known approach to setting step sizes (Polyak step sizes) from the optimization community to define their Adaptive-Weight Sampling scheme. This combination is a novel idea and a starting point to explore other methods of setting step sizes in this active learning context.
Weaknesses: While the technical contributions of this paper are interesting, the primary weakness is the communication of results. This reviewer has an optimization background rather than an active learning one, but even accounting for this, the organization and exposition of results was challenging to follow. For example, rather than defining algorithms in a LaTeX algorithm block as is standard in the literature, the authors simply refer to modifying the (projected) SGD update. While the general idea of active learning is simple, it is also unclear which variant of active learning the authors are studying. Based on the step size being defined as a Bernoulli variable and the experiments conducting one pass over the dataset, it appears that the authors are studying a “streaming” approach to active learning, where the decision to evaluate the label or not is made upon encountering each datapoint. It appears that the selection operation is to ignore the point when the Bernoulli sample is zero and evaluate the loss otherwise. This is not made clear in the writing, and an optimization audience may be confused as to why some steps have step size zero. This understanding could be incorrect, which is likely due to the lack of clarity and motivation of the definitions and algorithms. The authors are encouraged to be more explicit about what problem they are trying to solve and what the exact definition of their algorithm is.
Technical Quality: 3
Clarity: 2
Questions for Authors: To address the listed lack of clarity, is the above understanding correct?
Aside from the lack of clarity mentioned above, the primary question from an optimization perspective is the choice of the Polyak step size. Given there are many choices for “adaptive” step size methods, it would be interesting to know the motivation behind this choice, and if other methods were considered.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed some limitations in their work and potential future directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating the importance of our work in ensuring the effective deployment of loss-based sampling strategies in practice, and suggesting where we can improve the presentation quality.
# Question 1
In response to the first question raised by the reviewer: the reviewer's understanding of our problem formulation and algorithm definition is correct. We study convergence rates in the streaming computation setting, where the decision to evaluate the label or not is made upon encountering each data point. This is the same setting as in the convergence analysis of uncertainty-based sampling policies by Raj and Bach [2022]. Our algorithm is defined as the projected SGD in Equation (1) with a stochastic step size ($z_t$ in iteration $t$). Different active learning algorithms are accommodated by appropriately defining the distribution of the stochastic step size $z_t$, allowing for loss- and uncertainty-based sampling. Specifically, we consider constant-weight and adaptive-weight sampling policies, which are explained in Section 2, with further details provided in Section 3 for each algorithm studied. The decision not to evaluate the label of the data point in iteration $t$ implies a stochastic step size of zero ($z_t = 0$). We appreciate the reviewer's comment that a reader with an optimization background may get confused because our setup is similar to but different from the standard projected SGD. The meaning of the stochastic step size in the active learning setting is also different from what is typical in optimization. In our revision, we will add text to emphasize that we study convergence in the streaming computation setting and provide additional explanations for our definitions and algorithms. Specifically, we will make the following modifications in the problem statement section.
### Revision of Section 2: (added text in bold)
__Consider the setting of streaming algorithms where a machine learning model parameter $\theta_t$ is updated sequentially, upon encountering each data point, with $(x_1,y_1),\ldots, (x_n,y_n) \in \mathcal{X}\times \mathcal{Y}$ denoting the sequence of data points with the corresponding labels, assumed to be independent and identically distributed with distribution $\mathcal{D}$. Specifically, we consider the class of projected SGD algorithms defined as: given an initial value $\theta_1\in \Theta$,__
$\theta\_{t+1} = \mathcal{P}\_{\Theta\_0}\left (\theta\_t - z\_t \nabla\_\theta \ell(x\_t, y\_t, \theta\_t)\right), \hbox{for}\hspace{0.1cm} t \geq 1$ (Equation 1)
where $\ell:\mathcal{X}\times \mathcal{Y}\times \Theta\rightarrow \mathbb{R}$ is a training loss function, $z_t$ is a stochastic step size with mean $\zeta(x_t, y_t, \theta_t)$ for some function $\zeta: \mathcal{X} \times \mathcal{Y} \times \Theta \mapsto \mathbb{R}\_+$, $\Theta\_0 \subseteq \Theta$, and $\mathcal{P}\_{\Theta_0}$ is the projection function, i.e., $\mathcal{P}\_{\Theta_0}(u) = \arg\min\_{v \in \Theta_0} ||u - v||$. Unless specified otherwise, we consider the case $\Theta\_0 = \Theta$, which requires no projection. For binary classification tasks, we assume $\mathcal{Y} = \{-1,1\}$. For every $t>0$, we define $\bar{\theta}\_t = (1/t)\sum\_{s=1}^t \theta\_s$.
__By defining the distribution of the stochastic step size $z_t$ in Equation 1 appropriately, we can accommodate different active learning and data subset selection algorithms. In the context of active learning algorithms, at each step $t$, the algorithm observes the value of $x_t$ and decides whether or not to observe the value of the label $y_t$ which affects the value of $z_t$. Deciding not to observe the value of the label $y_t$ implies the step size $z_t$ of value zero (not updating the machine learning model).__
For the choice of the stochastic step size, we consider two cases: (a) \emph{Constant-Weight Sampling}: a Bernoulli sampling with a constant step size, and (b) \emph{Adaptive-Weight Sampling}: a sampling that achieves stochastic Polyak's step size in expectation.
For case (a), $z\_t$ is the product of a constant step size $\gamma$ and a Bernoulli random variable with mean $\pi(x\_t, y\_t, \theta\_t)$.
For case (b), $\zeta(x, y, \theta)$ is the "stochastic" Polyak's step size, and $z\_t$ is equal to $\zeta(x\_t, y\_t, \theta\_t) / \pi(x\_t, y\_t, \theta\_t)$ with probability $\pi(x\_t, y\_t, \theta\_t)$ and is equal to $0$ otherwise. Note that using the notation $\pi(x,y,\theta)$ allows for the case when the sampling probability does not depend on the value of the label $y$.
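To make the streaming setup concrete, the following is a minimal sketch of the projected SGD of Equation (1) under the constant-weight sampling of case (a), on a toy least-squares stream. The squared loss, the ball projection set, and all names and constants are illustrative choices for this sketch, not part of the paper's formal setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(theta, x, y):
    # Gradient of the squared loss l(x, y, theta) = 0.5 * (theta @ x - y)^2.
    return (theta @ x - y) * x

def project(theta, radius=10.0):
    # Euclidean projection onto the ball Theta_0 = {v : ||v|| <= radius}.
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

# Synthetic stream of i.i.d. data points (x_t, y_t).
d, n = 5, 2000
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
Y = X @ theta_star

# Case (a): z_t = gamma * Bernoulli(pi). Here pi is a constant sampling
# probability; in general pi(x_t, y_t, theta_t) may depend on the data point,
# e.g. on its estimated loss for loss-based sampling.
theta = np.zeros(d)
gamma, pi = 0.05, 0.3
labels_queried = 0
for x, y in zip(X, Y):
    if rng.random() < pi:          # decide to query the label y_t
        labels_queried += 1
        theta = project(theta - gamma * grad(theta, x, y))
    # else: z_t = 0, i.e. y_t is never observed and the model is not updated

print(labels_queried, np.linalg.norm(theta - theta_star))
```

Note how a zero stochastic step size corresponds exactly to not querying the label, which is the connection between Equation (1) and active learning described above.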
# Question 2
With regard to the question on the motivation for studying the adaptive step size according to stochastic Polyak's step size, our primary motivation is its fast convergence rate, as shown by Loizou et al. [2021], both theoretically and experimentally. Specifically, it achieves a convergence rate of $O(1/n)$ in a smooth convex optimization setting, which is an improvement compared to a projected SGD with a constant step size, which achieves a convergence rate of $O(1/\sqrt{n})$. The experimental results in Loizou et al. [2021] demonstrate cases where the adaptive step size according to stochastic Polyak's step size outperforms several benchmarks used for comparison, including the popular Adam optimizer. Our work shows that the aforementioned theoretical convergence rate guarantee of projected SGD with adaptive step size according to stochastic Polyak's step size can be achieved in a data sampling or active learning setting, as shown in Theorem 3.6. Loizou et al. [2021] did not study this setting. As for considering other adaptive step sizes, future work may study their convergence properties when used in conjunction with sampling, as in our adaptive-weight sampling algorithm. We will add a note to propose this as an interesting direction for future research in the conclusion section.
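As a concrete illustration of the adaptive step size, here is a minimal sketch of the SPS$_{\max}$ rule of Loizou et al. [2021], $\gamma_t = \min\big(f_i(\theta_t) / (c \\|\nabla f_i(\theta_t)\\|^2), \gamma_b\big)$, applied without sampling to a toy interpolating least-squares stream (all constants and names are illustrative, and we assume the per-sample optimal loss is zero):

```python
import numpy as np

rng = np.random.default_rng(2)

def sps_step_size(loss_i, grad_i, c=0.5, gamma_max=10.0):
    # Stochastic Polyak step size with an upper cap (SPS_max), assuming
    # the per-sample optimal loss value is zero (interpolation setting).
    g2 = float(grad_i @ grad_i)
    return gamma_max if g2 == 0.0 else min(loss_i / (c * g2), gamma_max)

d, n = 3, 500
theta_star = rng.normal(size=d)
theta = np.zeros(d)
for _ in range(n):
    x = rng.normal(size=d)
    y = x @ theta_star
    r = theta @ x - y
    loss = 0.5 * r * r                      # per-sample squared loss
    g = r * x                               # per-sample gradient
    # With the squared loss, c = 0.5 gives step 1/||x||^2, a Kaczmarz-style
    # update that fits the sampled point exactly when the cap is inactive.
    theta = theta - sps_step_size(loss, g) * g

print(np.linalg.norm(theta - theta_star))
```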
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the clarifications here and in a revision of their paper, along with the insights about their choice of step size. I will maintain my score. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments. We appreciate the positive feedback recognizing the problem we study as interesting, novel, and important, as well as the positive assessment of our results, which include both theoretical and experimental analyses. We also value the technical questions and suggestions for improving the clarity of our presentation.
Our study is motivated by the increasing interest in using active learning algorithms to effectively train machine learning models, focusing on the convergence rate of the training loss function and the data labeling cost, as well as by scalable training of machine learning algorithms using a subset of training data points. We focus on data selection methods based on data point loss values (loss-based methods), as these have recently gained attention both in theory (we list several references in the paper) and in practice (they are deployed in various industry applications), while their performance guarantees are not well understood.
The reviewers expressed different opinions regarding the presentation quality of our paper, with one reviewer finding it clear and others suggesting areas for improvement.
We appreciate the reviewer's feedback and acknowledge that our initial submission lacked clarity, in particular in explaining that we consider a class of streaming algorithms, focusing on projected SGD algorithms with stochastic step size, and how the concept of the stochastic step size relates to active learning and data subset selection problems. A crucial point to understanding this connection is that a zero step size can be seen as not selecting a point for labeling, and hence, the connection to loss-based active learning arises from stochastic step sizes that are specific to each point and depend on its estimated loss. In our review responses, we proposed concrete revisions to clarify these points.
Our theoretical results on convergence rates and sampling complexity of loss-based methods apply to active learning under the assumption that the algorithm has access to the exact loss value of each data point, which is used to decide whether to query the value of a data point's label. In practice, a loss-based active learning algorithm uses a noisy estimate of loss values, which is considered in our experimental results, showing good conformance with our theoretical results. We regard our theoretical results as an important step towards understanding loss-based active learning algorithms. In the context of the data subset selection problem, the algorithm can observe the label of each training data point, and thus exact loss values of data points are available to the algorithm.
A reviewer noted that we missed providing experimental results for one combination of sampling and adaptive step size, namely, uniform sampling with stochastic Polyak's step size. In response, we conducted additional experiments and included the results in our responses (see link to PDF below).
Pdf: /pdf/eaf9159f958a2659b53f7176c8aab256ff4c161e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Language Models as Hierarchy Encoders | Accept (poster) | Summary: This paper proposes a method that leverages a non-Euclidean representation space to better encode hierarchical relationships between entities. Specifically, a hyperbolic embedding space is defined, wherein items closer to the origin are higher-level concepts, and items further from the origin are lower-level concepts; items closer to each other typically have closer semantic relationships, such that it is then straightforward to translate between geometric distances and inheritance relationships between concepts—even when the relationship was not explicitly encoded in the training set.
In a series of experiments, the authors leverage sentence-Transformer models as bases to predict whether two entities have taxonomic relationships in a zero-shot manner. Comparisons are performed between the base sentence-Transformer models, the same models with task-specific fine-tuning, and the proposed hyperbolic encoding method. It is observed that the proposed method outperforms more naive methods.
Strengths: * The proposed embedding space works well as a way to categorize hierarchical relationships between inputs. It also makes it easy to tell what the relationship is between two concepts in latent space, as distances between concepts and distances to the origin both have well-defined meanings in this space.
* The method can be run with modest compute resources, making it accessible and easily scalable as sentence-Transformer models scale and improve.
* The idea is interesting and, to my knowledge, novel.
Weaknesses: 1. As the authors acknowledge, it is not clear what other knowledge is lost when moving to this new latent space. While hierarchical relationships between entities are now easy to understand, does the new space also preserve properties of each independent entity? Evaluations on a wider variety of downstream tasks would be helpful for establishing what kinds of semantic information are preserved.
2. The analysis in Table 4 could have been cherry-picked. A more systematic version of the analysis in Table 4 is possible, and would be nice to see: specifically, one could search over a larger subset of WordNet, quantifying whether more deeply nested concepts correlate with higher h-norms, and whether more closely related concepts have significantly lower hyperbolic distances than unrelated but semantically similar concepts.
3. When we classify whether two entities are taxonomically related, we usually care more about whether we can leverage these relationships to improve performance on some other metric. For example, better representational quality could lead to better deductive reasoning. Consider, for example, a counterfactual reasoning task where we say “Birds have fur. Is it therefore true that sparrows have fur?” A model that better captures taxonomic relationships should be better at this task. What I’m getting at is that it would be nice to evaluate on downstream tasks that require entity classification as a key part, rather than directly evaluating entity classification (a rare task). This would demonstrate the utility of this method, and increase interest and impact among a broader audience.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why sample to a consistent 1:10 ratio of positive:negative examples? Was this decision made empirically?
2. How long did it take to run your proposed method compared to standard fine-tuning? Is there a better trade-off between performance and compute?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback. We address your comments and questions below:
------
**Regarding Weakness 1**:
Our current focus in this paper is on enabling transformer encoder-based language models to explicitly encode hierarchies. We recognise the importance of evaluating the preservation of other semantic information. In our future work, we plan to explore how our method impacts semantic properties unrelated to hierarchies, to ensure a balanced representation that retains existing language understanding.
------
**Regarding Weakness 2**:
We compare the Pearson correlation coefficients across different hyperbolic models to measure the linear relationship between entities' hyperbolic norms and their depths in WordNet. Our analysis shows that all hyperbolic models lead to a positive correlation between norms and depths, as expected. However, our HiT model demonstrates a stronger correlation than both PoincaréEmbed and HyperbolicCone.
|HiT | PoincaréEmbed | HyperbolicCone |
|:-------------------------:|:-----------------------------:|:----------------------------:|
| 0.346 | 0.130 | 0.245 |
>Table: Statistical correlations between WordNet entities' depths and their hyperbolic norms in HiT, Poincaré Embed, and HyperbolicCone, respectively.
We will add this analysis in the revised manuscript.
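A sketch of how such a correlation can be computed (pure Python; the depth/norm pairs below are illustrative toy values, not the actual WordNet measurements):

```python
import math

def pearson(xs, ys):
    # Sample Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: entity depths in a hierarchy paired with hypothetical
# hyperbolic norms of the corresponding embeddings.
depths = [1, 2, 3, 1, 2, 3]
h_norms = [13.6, 15.5, 19.2, 17.3, 17.4, 19.8]
print(round(pearson(depths, h_norms), 3))
```

A positive coefficient, as in the table above, indicates that deeper (more specific) entities tend to sit farther from the origin of the hyperbolic space.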
The **Hard Negative** setting for each task is designed to determine if our models can distinguish unrelated but semantically similar concepts (e.g., sibling entities), as you suggested. Our results demonstrate that HiT models consistently outperform other baselines in this setting. The hyperbolic distances between closely related concepts will not be dramatically lower than those between unrelated concepts, because our loss functions are optimised for relative differences; nevertheless, they are sufficiently distinct to enable good predictions.
------
**Regarding Weakness 3**:
Our current evaluation tasks aim to understand if models can generalise from asserted subsumptions to inferred (transitive) and unseen (inductive) subsumptions, which is a critical capability for completing missing knowledge or enriching new knowledge in taxonomies and hierarchies. We acknowledge the importance of further downstream tasks to demonstrate more real-world utilities. Taking your counterfactual reasoning example, it can be formulated as predicting if “sparrow” is subsumed by “something that has fur”, which can be seen as an existential restriction (a kind of complex concept) in the context of ontology. We will extend our settings to handle complex, non-atomic entities in future work.
------
**Regarding Question 1**:
We followed the evaluation setting in [1] to maintain consistency with existing hyperbolic baselines.
------
**Regarding Question 2**:
Using our GPU resources (see Appendix), taking all-MiniLM-L6-v2 as the base model, hierarchy re-training takes approximately 65 minutes, while standard fine-tuning takes about 17 minutes. Standard fine-tuning requires only a few epochs for convergence, but its performance is capped and cannot be further improved. Early stopping is a possible optimisation for hierarchy re-training to save time, but it was not considered in this work. We will explore this and other optimisation strategies in future work.
------
- [1] Ganea, Octavian, Gary Bécigneul, and Thomas Hofmann. "Hyperbolic entailment cones for learning hierarchical embeddings." ICML (2018).
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response. I appreciate the extra analysis in response to Weakness 2, and consider this addressed.
The rest of the points would be very helpful in demonstrating more general impact, but I suppose these would be effort-intensive. As-is, the paper feels like it could benefit greatly from more explicitly evaluating generalizability outside of this narrow task setting. I therefore would like to keep my score the same. That said, if the paper is borderline after taking all other reviews into account, consider my vote in favor of accepting.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. We appreciate your acknowledgment of the additional analysis addressing Weakness 2 and your willingness to consider the paper positively. | Summary: The paper introduces a method to re-train transformer-based language models as Hierarchy Transformer encoders (HITs), using the properties of hyperbolic space to enhance their ability to encode hierarchical structures in language.
Strengths: 1. The utilization of hyperbolic space to encode hierarchical structures in language models is a creative and theoretically sound approach, as hyperbolic space naturally lends itself to representing hierarchies.
2. The paper provides extensive experimental results, showing that HITs consistently outperform traditional pre-trained and fine-tuned models across multiple datasets and tasks.
3. The methodology, experimental setup, and results are clear, making the paper accessible to readers.
Weaknesses: 1. The motivation is a bit overstated since it is heavily on the claim that current language models are significantly deficient in encoding hierarchical information. Thus, improvements can certainly be made.
2. The application of hyperbolic spaces in language models, while interesting, is not especially inspiring, as previous works have explored similar ideas. The distinction between this and prior approaches isn't as significant as suggested.
3. The downstream tasks selected by this study deviate from those typically used to evaluate LLMs.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback. We address your comments and questions below:
------
**Regarding Weakness 1**:
In the Introduction, we acknowledge that hierarchical information has been considered in existing language model studies. Our claim emphasises the lack of **explicit geometric interpretability in hierarchical encoding**. For instance, as highlighted in [1], pre-trained language models often fail to capture the transitivity property inherent in hierarchical relationships. Our experiments demonstrate that standard fine-tuning, despite its strengths, cannot effectively capture this transitivity (and does not have sufficient interpretability) compared to our proposed HiT method. We will revise our contribution statement to make it clearer.
------
**Regarding Weakness 2**:
The major differences of our work compared to existing related works are discussed in Sections 1 & 5. In particular, we mentioned that:
- Previous works on pre-trained language models considering hierarchies did not focus on explicit geometric interpretability (line 338); while our work seeks to construct a language model-based hierarchy encoder with such interpretability.
- Previous hyperbolic embeddings did not support inductive prediction (line 343) or did not handle unseen entities effectively (line 344-345); while our models can naturally support inductive prediction on unseen data.
- [2] investigated adding a downstream hyperbolic layer to pre-trained LMs for classification tasks, whereas our work focuses on explicitly encoding hierarchies without requiring additional trainable parameters (line 346-349).
------
**Regarding Weakness 3**:
Our evaluation tasks are designed to assess whether **transformer encoder-based language models** can be trained to capture hierarchical structures by their ability to generalise from asserted subsumptions to inferred and unseen subsumptions. Our mixed-hop prediction task, which involves predicting missing subsumptions between arbitrary entities (potentially unseen), is a critical knowledge-intensive task for completing or enriching knowledge in hierarchies and taxonomies.
------
- [1] Ruixi Lin and Hwee Tou Ng. “Does bert know that the is-a relation is transitive?” ACL (2022).
- [2] Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, and Liping Jing. “Probing bert in hyperbolic spaces.” ICLR (2020). | Summary: This paper proposes a new way to retrain encoder-based language models into hierarchy encoders. Specifically, they propose to recast the output embedding space onto a Poincaré ball and retraining with the designed loss functions for organizing entities into hierarchy. The experiments on real world datasets like WordNet demonstrates the advantages of the proposed approach.
Strengths: - The paper is well-written, scoped, and nicely organized.
- The topic of addressing hierarchy with language models is interesting.
- The proposed approach is novel and the corresponding experimental results validate the effectiveness of the proposed method.
Weaknesses: - Can you explain if the proposed method can be used to decoder-only models?
- Will frequency of entities in the data impact the hierarchy captured in the representations?
Technical Quality: 3
Clarity: 4
Questions for Authors: - see. weakness
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss their limitation in the sec 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback. We address your comments and questions below:
-------
**Regarding Weakness 1**:
Our current method is specifically designed for encoder-based language models due to their ability to produce embeddings with **more straightforward semantic meanings**. Decoder-based models, which generate tokens based on previously generated ones, present a **challenge in directly editing their embeddings to assign geometric interpretability**. Retrieving meaningful semantic embeddings from decoder-based LLMs is itself a challenging research question. This is an area we recognise as valuable for future research. Recent works like [1] attempt to enable typical encoder training on decoder-based LLMs by modifying attention mechanisms and specifically masking the next token. We believe our approach **can be adapted to the decoder-based framework when a more systematic way of embedding extraction from decoder-only models is developed**. Despite this, our encoder-based models can still support decoder-based models by providing hierarchy-aware context, which can enhance the generation process.
-------
**Regarding Weakness 2**:
Frequent entities receive more exposure during training, resulting in more detailed and fine-grained hierarchical representations. This can lead to a similar challenge faced in many machine learning works: handling long-tail entities that appear less frequently. However, as our models are re-trained from pre-trained LMs, the distribution of entities in the pre-training data also has an impact. Addressing this aspect is beyond the current scope of our work.
-------
- [1] BehnamGhader, Parishad, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, and Siva Reddy. "LLM2vec: Large language models are secretly powerful text encoders." arXiv (2024).
---
Rebuttal Comment 1.1:
Title: thank you for the rebuttal.
Comment: thank you for your response.
---
Rebuttal 2:
Comment: We sincerely appreciate your response and wish our rebuttal has addressed your concerns. | Summary: This paper presents a novel approach called Hierarchy Transformer encoders (HITs) to retrain transformer-based language models to better encode hierarchical structures in language. The method involves situating the output embedding space of pre-trained language models within a Poincaré ball and training on hyperbolic clustering and centripetal losses. The authors evaluate HITs against pre-trained LMs, fine-tuned LMs, and hyperbolic embedding baselines on tasks including multi-hop inference and mixed-hop prediction. Results show that HITs consistently outperform baselines, demonstrating improved ability to capture hierarchical relationships and generalize across hierarchies.
Strengths: 1. The paper introduces a novel method to explicitly encode hierarchies in language models without adding new parameters.
2. The authors conduct extensive experiments across multiple tasks and datasets, and show the effectiveness of their approach.
Weaknesses: 1. The proposed method requires predefined hierarchies which may limit its generalization, especially when new entity arrives while no relation is presented for the new entity.
2. Applications of the learnt entity embedding are desired to show the effectiveness of the embedding. For example, how can the embedding be used in downstream tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you show more examples on how hierarchies are encoded through the proposed approach?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and feedback. We address your comments and questions below:
--------
**Regarding Weakness 1**:
Our HiT models **can deal with new entities** and the capability of predicting subsumptions between arbitrary entity pairs is one of the key highlights. Our Mixed-hop prediction task specifically examines a model's ability to make **inductive predictions involving unseen, new entities**. During evaluation, we exclude 10% of the subsumptions, which can include these unseen entities, to test this capability. Additionally, we conduct transfer evaluations across three different hierarchies. In all these settings, the HiT models significantly surpass the baselines, demonstrating their ability to generalise and handle new entities effectively.
--------
**Regarding Weakness 2**:
The primary focus of this work is to explore **explicit hierarchy encoding** within **transformer encoder-based language models**. Our evaluation tasks are tailored for this purpose and align with evaluation in prior works like [1] and [2], which aim to construct **geometrically interpretable** hierarchy embeddings, while extending to real-world scenarios where entities and relationships can be new and unseen. Predicting subsumptions and other hierarchical relationships between arbitrary entities is a crucial task for completing missing knowledge or enriching new knowledge in taxonomies and hierarchies, as well as querying hierarchical contexts from them. We will explore more specific use-cases and downstream tasks of HiTs in future work.
--------
**Regarding Question 1**:
Below shows two example hierarchical paths in WordNet:
```
person -> actor -> comedian
fluid -> liquid -> water
```
and their embeddings have the following statistics:
| | person | actor | comedian | fluid | liquid | water |
|:---------|---------:|--------:|-----------:|--------:|---------:|--------:|
| **person** | 0 | 5.3 | 12 | 15 | 15.5 | 20 |
| **actor** | 5.3 | 0 | 10.9 | 18.2 | 18.7 | 22.9 |
| **comedian** | 12 | 10.9 | 0 | 21.5 | 21.3 | 25.5 |
| **fluid** | 15 | 18.2 | 21.5 | 0 | 5.1 | 11.1 |
| **liquid** | 15.5 | 18.7 | 21.3 | 5.1 | 0 | 9 |
| **water** | 20 | 22.9 | 25.5 | 11.1 | 9 | 0 |
| *h-norms* | 13.6 | 15.5 | 19.2 | 17.3 | 17.4 | 19.8 |
> Table: Hyperbolic distances between the embeddings of entities in above examples, along with their individual hyperbolic norms.
We can see that the hyperbolic norms of entity embeddings generally follow the hierarchical paths and related entities are closer than non-related ones.
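The distances and norms above follow the Poincaré-ball metric. A unit-curvature sketch of that metric (the model's actual curvature setting may differ, so absolute values will not match the table; the two example points are hypothetical):

```python
import math

def poincare_dist(u, v, c=1.0):
    # Distance on the Poincare ball of curvature -c; for c = 1 this is
    # d(u, v) = arcosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2))).
    sq = lambda w: sum(wi * wi for wi in w)
    diff = sq([ui - vi for ui, vi in zip(u, v)])
    num = 2.0 * c * diff
    den = (1.0 - c * sq(u)) * (1.0 - c * sq(v))
    return math.acosh(1.0 + num / den) / math.sqrt(c)

def h_norm(u, c=1.0):
    # Hyperbolic norm = distance from the origin.
    return poincare_dist(u, [0.0] * len(u), c)

parent = [0.1, 0.2]   # closer to the origin -> more general concept
child  = [0.3, 0.6]   # farther from the origin -> more specific concept
print(poincare_dist(parent, child), h_norm(parent), h_norm(child))
```

Because the denominator shrinks near the boundary of the ball, distances grow rapidly there, which is what lets a hierarchy's exponentially growing lower levels be embedded with low distortion.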
--------
- [1] Nickel, Maximillian, and Douwe Kiela. "Poincaré embeddings for learning hierarchical representations." NeurIPS (2017).
- [2] Ganea, Octavian, Gary Bécigneul, and Thomas Hofmann. "Hyperbolic entailment cones for learning hierarchical embeddings." ICML (2018).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We appreciate your feedback and hope that our rebuttal has adequately addressed your concerns. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beyond Efficiency: Molecular Data Pruning for Enhanced Generalization | Accept (poster) | Summary: The paper presents a new framework MolPeg aimed at improving the efficiency and generalization of training models in molecular tasks using pretrained models. MolPeg introduces a novel DP technique that maintains two models with different update paces during training, leveraging the loss discrepancy between these models to score and select the most informative samples. This approach is applied in a source-free data pruning scenario, which does not require access to the original pretraining data. Extensive experiments across four molecular tasks demonstrate the effectiveness of MolPeg method.
Strengths: 1. The paper introduces a novel approach to data pruning by leveraging pretrained models and a loss discrepancy scoring mechanism.
2. The paper is well-organized, with a clear presentation of the problem, the proposed solution, and the experimental results.
3. The experiments are comprehensive, covering multiple datasets and tasks, and provide empirical evidence supporting the efficacy of MolPeg.
Weaknesses: 1. The effectiveness of MolPeg is highly dependent on the quality and suitability of the pretrained models utilized. If appropriate pretrained models are unavailable, the benefits of MolPeg may not be fully realized.
2. The proposed framework requires the maintenance and simultaneous updating of two models, which could increase both implementation complexity and computational overhead compared to simpler pruning methods.
3. Most DP methods demonstrate their effectiveness on widely-used datasets such as ImageNet. However, molecular datasets represent a more specialized field. It is crucial to test MolPeg with mainstream validation datasets to further demonstrate its effectiveness.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to weakness 3, why not validate the effectiveness of MolPeg with mainstream datasets? Molecular datasets belong to a more specialized field and seem less persuasive compared to datasets like ImageNet.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**1. The effectiveness of MolPeg is highly dependent on the quality and suitability of the pretrained models utilized.**
Thanks for your valuable comments. We agree with the reviewer’s point that the quality of pre-training can influence the effectiveness of our method. In source-free transfer learning, the pretrained model is a fixed, hard-coded component, and variations in its quality naturally lead to performance changes. To address the reviewer's concern more thoroughly, we have conducted additional experiments on the HIV dataset using two pretrained models of different quality, obtained from the ZINC-100K and QM9 datasets, respectively. Compared to the PCQM4Mv2 dataset used in the main text, these two datasets are smaller in scale and exhibit more pronounced distribution shifts, resulting in poorer pretraining quality. The experimental results are shown in `Table 2` of the PDF file in the `General Response`.
We observe the following trends: MolPeg still achieves the best performance with these two pretrained models, demonstrating that while pretraining quality is a key factor affecting performance, **MolPeg remains the most robust and effective method compared to existing DP strategies.** We again thank the reviewer for pointing out this issue and hope this explanation addresses your concerns.
> **2. The proposed framework requires the maintenance and simultaneous updating of two models, which could increase both implementation complexity and computational overhead compared to simpler pruning methods.**
Thanks for your valuable comments. The complexity is undoubtedly crucial for efficient learning, but we respectfully disagree with the reviewer regarding our method being more costly compared to existing pruning methods. It is worth noting that most existing DP methods are static and require scoring and ranking **on the full dataset before training**, which involves a high cost and computational overhead. We address the reviewer's concerns from two perspectives: complexity analysis and experimental validation.
- **Complexity Analysis**: To compare the computational complexity of our method, MolPeg, with other data pruning methods, we use the following notations: $ N $ is the total number of data points, $ \delta $ is the pruning ratio, $ T $ is the total number of training epochs, $ t $ is the number of pre-scoring epochs required by other methods to determine scores, and $H_{fw}$, $H_{bw}$ denote the complexity of a forward and a backward operation, respectively. Note that, compared to the TopK operation, the majority of the time consumption comes from scoring the samples, as this typically involves computing the loss, gradients, or even more complex metrics. Therefore, we disregard the complexity of the sorting process and focus on the forward and backward complexity.
| | MolPeg | Other methods |
| ------------------- | ----------------------------------- | ---------------------------------------- |
| **Forward passes** | $ \mathcal{O}(2H_{fw}(N)) $ | $ \mathcal{O}(H_{fw}((t + T\delta)N)) $ |
| **Backward passes** | $ \mathcal{O}(H_{bw}(T\delta N)) $ | $ \mathcal{O}(H_{bw}((t + T\delta)N)) $ |
While MolPeg performs more forward passes due to processing two models, as the reviewer notes, the computational bottleneck is the backward pass, which typically costs more than twice as much as a forward pass [1]. Other methods require $ tN $ extra backward passes for pre-scoring, which outweighs their advantage in forward passes.
Therefore, despite the added forward pass complexity, **MolPeg is overall more efficient due to the significantly reduced number of backward passes**.
[1] PyTorch Distributed: Experiences on Accelerating Data Parallel Training. VLDB 2020.
- **Experimental Validation**: In `section 5.1` of the manuscript, we have provided an experimental analysis of efficiency comparison. As seen in `Figure 4` of the manuscript, our method demonstrates significantly shorter runtime compared to previous DP methods and achieves better model performance. Compared to the latest dynamic pruning methods, although our runtime efficiency is slightly behind, we achieve better generalization performance, which is crucial for transfer learning scenarios. This minor efficiency compromise is acceptable.
> **3. Molecular datasets represent a more specialized field. It is crucial to test MolPeg with mainstream validation datasets to further demonstrate its effectiveness.**
Thanks for your constructive suggestion and helpful feedback. To further demonstrate MolPeg's effectiveness, we have provided the experimental results on CIFAR-10 and CIFAR-100 in `Table 3` of the rebuttal PDF file. For the experimental setup, we follow the settings of InfoBatch. However, since our method requires a pretrained model, we adopt the MoCo strategy to pretrain a ResNet-18 on ImageNet and then fine-tune it on the CIFAR datasets. As seen from the experimental results, **MolPeg still achieves the best results at most pruning ratios, further validating the effectiveness of our method across different domains.**
Regarding the reviewer's concern about the choice of datasets, we would like to restate our research motivation. As mentioned in the `General Response`, our research is problem-driven and targets a ubiquitous and unsolved problem in the AI for Chemistry community. Moreover, molecular tasks represent a complex and comprehensive task system. Unlike traditional DP tasks in CV, which focus on image classification, we validate MolPeg on **both classification and regression tasks**, involving **various molecular modalities** and **diverse task types**, which makes our evaluation even more comprehensive than traditional DP approaches. We will add the above experimental results to the appendix of the revised manuscript and we hope this response addresses your concern.
---
Rebuttal Comment 1.1:
Comment: Hello,
Thank you for your response, which I believe has addressed most of my questions. While I am not very familiar with "AI for Chemistry", I am leaning towards acceptance. I will finalize my ratings after further discussions with other reviewers and exchanging opinions with them.
Best regards,
Reviewer uU9Z
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer uU9Z,
We are delighted that most of your concerns have been addressed and truly appreciate your openness towards accepting our work.
We understand that the final rating will be made after further discussions with other reviewers, and we hope that our manuscript continues to stand out during this process. If there are any remaining questions or additional clarifications needed, please feel free to reach out to us. We are more than happy to provide further information.
Thank you again for your time and consideration.
Best regards,
Authors
---
Reply to Comment 1.1.2:
Comment: Dear Reviewer uU9Z,
We hope this message finds you well.
We sincerely appreciate the time and effort you've dedicated to reviewing our submission and your thoughtful consideration of our rebuttal. Your feedback has been invaluable in helping us refine our work. We understand that you may still be discussing the paper with other reviewers. As the Author-Reviewer discussion phase is coming to a close, we would greatly appreciate **any updates you may have on the current ratings**. We would also appreciate any further comments and questions.
Thank you once again for your careful consideration!
Best regards,
Authors
---
Rebuttal 2:
Title: Correction of Typing Error in Complexity Analysis
Comment: Dear Reviewer uU9Z,
During our proofreading process, we discovered a typing error in the time complexity of the forward passes for the MolPeg method. It should be $ \mathcal{O}(2H_{fw}(T\delta N)) $, while the textual analysis in the initial rebuttal is correct. We sincerely apologize for this oversight and invite you to refer to the corrected time complexity table below:
| | MolPeg complexity | Other methods complexity |
| ------------------- | ------------------------------------------- | ----------------------------------------------------------- |
| **Forward passes** | $ \mathcal{O}(2H_{fw}(T\delta N)) $ | $ \mathcal{O}(H_{fw}((t + T\delta)N)) $ |
| | forwarding both online and reference models | forwarding online model in pre-scoring and training epochs |
| **Backward passes** | $ \mathcal{O}(H_{bw}(T\delta N)) $ | $ \mathcal{O}(H_{bw}((t + T\delta)N)) $ |
| | backwarding only the online model | backwarding parameters in both pre-scoring and training epochs |
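For intuition, the corrected operation counts can be compared numerically. The sketch below is our illustration only: the concrete values of $N$, $T$, $t$, and $\delta$ are assumptions chosen for demonstration, not measurements from the paper, and a backward pass is weighted at roughly twice the cost of a forward pass, in line with the analysis in the initial rebuttal.

```python
# Illustrative cost comparison under the corrected complexity table.
# All concrete numbers below are assumptions, not measured values.
N = 100_000       # total samples
T = 100           # training epochs
t = 10            # pre-scoring epochs needed by static DP methods
delta = 0.2       # retained fraction per epoch
BW_OVER_FW = 2.0  # a backward pass is assumed ~2x the cost of a forward pass

# MolPeg: forwards both online and reference models, backwards only the online model
molpeg_cost = 2 * T * delta * N + BW_OVER_FW * T * delta * N
# Static methods: forward + backward in both pre-scoring and training epochs
other_cost = (t + T * delta) * N + BW_OVER_FW * (t + T * delta) * N

print(molpeg_cost < other_cost)  # True: MolPeg is cheaper under these assumptions
```

Under these illustrative settings, MolPeg's extra forward passes are more than offset by the $tN$ backward passes that pre-scoring methods spend before training.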
We appreciate your understanding and attention to this correction.
Best regards,
Authors | Summary: By utilizing pre-trained models, this paper presents a plug-and-play framework (MolPeg) to prune target data without access to the source dataset. By maintaining two models with different updating paces during training, this paper introduces a novel scoring function to measure the informativeness of samples based on the loss discrepancy. The experimental results on 3 datasets show enhanced efficiency and superior generalization in transfer learning.
Strengths: - This paper is well-written. Each component of the design space is carefully explained and well-presented.
- The method is easy to follow and can be adapted to other tasks.
- Extensive experiments are conducted to verify the effectiveness of this method.
Weaknesses: (Minor) Missing recent works: 1) static data pruning [1,2], 2) dynamic data pruning [3]
1. Active learning is a strong baseline for data subset selection. NeurIPS workshop, 2022
2. CCS: Coverage-centric Coreset Selection for High Pruning Rates. ICLR, 2023
3. Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt. ICML, 2022
Technical Quality: 4
Clarity: 4
Questions for Authors: See weakness.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **(Minor) Missing recent works: 1) static data pruning [1,2], 2) dynamic data pruning [3]**
Thanks for your recognition of our work and constructive suggestions. We apologize for omitting any relevant works. For the related works provided by the reviewer, we have added experimental results on the HIV and PCBA datasets for the first two works as additional baselines (AL and CCS). The experimental results are shown below:
| HIV Pruning ratio (%) | 90 | 80 | 70 | 60 | 40 | 20 |
| --------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
| AL | 80.7 | 81.1 | 82.9 | 84.0 | 84.8 | 85.1 |
| CCS | 81.5 | 82.3 | 83.8 | 84.2 | 85.0 | 85.2 |
| MolPeG | **83.7** | **84.8** | **85.3** | **85.5** | **86.0** | **85.6** |
| PCBA Pruning ratio (%) | 90 | 80 | 70 | 60 | 40 | 20 |
| ---------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
| AL | 15.2 | 19.2 | 20.9 | 22.5 | 25.2 | 26.2 |
| CCS | 15.5 | 19.9 | 21.5 | 23.5 | 25.9 | 26.3 |
| MolPeG | **20.7** | **23.9** | **25.6** | **26.4** | **26.8** | **27.0** |
It can be observed that MolPeG still achieves the best performance with significant improvements. We will add the above experimental results and empirical analysis to the appendix of the revised manuscript to enrich our research.
For the last related work, due to the time constraint during the rebuttal period, we are unable to reproduce their results. However, we will also include it in the discussion of related works in the revised version. Finally, we appreciate your helpful comments. If you have any other questions, please feel free to let us know.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I've read through other reviewers' feedback and responses as well. I have no more questions and will keep the score as it is.
Best regards,
Reviewer mAK4
---
Reply to Comment 1.1.1:
Comment: We truly appreciate your constructive feedback and continuous support, which we believe has further improved our work.
Thank you again for your time and consideration!
Best wishes,
Authors | Summary: The paper introduces MolPeg, a molecular data pruning framework designed to enhance generalization when applying data pruning to pretrained models for molecular tasks. MolPeg uses two models with different updating rates to develop a new scoring function that assesses the informativeness of data based on loss discrepancies.
Strengths: 1. By maintaining dual models that focus on both source and target domains and introducing a novel scoring function that selects both easy and hard samples, MolPeg achieves efficient, lightweight data pruning without the need for retraining.
2. The paper provides the code, ensuring the reproducibility of the method. I will attempt to run the code in the coming weeks and may modify my review comments as necessary.
Weaknesses: 1. Although the paper claims to be pioneering in applying data pruning to pretrained models, the motivation may require further exploration. Specifically, how can it ensure that OOD samples crucial for each task are not pruned, which could potentially undermine the very purpose of the pretrained model?
2. The definition of what constitutes a 'hard case' is unclear. Are these cases task-specific? If so, there's a risk that pruning might eliminate hard cases essential for certain tasks, affecting the model's comprehensiveness and utility.
3. The approach might exacerbate the 'molecular cliff' problem, where slight changes in molecular structure lead to significant changes in activity, and such nuances could be lost with aggressive data pruning in pre-training.
4. The experimental section is limited to only three commonly used molecular datasets. Other equally important datasets, such as MUV, were not included in the comparisons.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: It is suggested that the discussion of limitations be moved to the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **1. Although the paper claims to be pioneering in applying data pruning to pretrained models, the motivation may require further exploration. Specifically, how can it ensure that OOD samples crucial for each task are not pruned, which could potentially undermine the very purpose of the pretrained model?**
Thanks for your valuable feedback and insightful concerns. Preserving crucial samples is indeed a challenge for static DP methods in OOD scenarios. However, we would like to clarify that we employ a **dynamic DP strategy**. We invite the reviewer to refer to the `General Response` describing our pruning setup and the pseudo-code in the `Appendix G` for a better understanding. Below, we explain in detail why OOD samples crucial for each task are not pruned:
- **Broader Receptive Field of Dynamic Pruning Strategy.** Unlike static DP, although we use a fixed proportion of samples in each iteration, we monitor the training dynamics of the entire dataset. Our method naturally avoids the issue of completely ignoring crucial samples, as almost all samples are used to varying degrees. Coordinating the use of samples in each iteration to achieve better generalization is the main contribution of MolPeg. In `Figure 1` of the additional PDF file, we visualize the frequency of sample usage on the HIV dataset, showing that almost all samples are used for training even at aggressive pruning ratios, with crucial samples being used more frequently.
- **Crucial OOD samples are essentially our hard samples.** Note that determining whether or not a sample is crucial is a contentious issue. Even in supervised training with the full dataset, the importance of different samples varies across epochs. In our scenario, crucial OOD samples can be considered as the ones that the training struggles with, corresponding to those with gradients opposite to the EMA gradient, as theoretically analyzed in `Proposition 2`. Therefore, we actually regard crucial OOD samples as an important category (hard cases) to preserve, rather than filtering them out.
> **2. The definition of what constitutes a 'hard case' is unclear. Are these cases task-specific? If so, there's a risk that pruning might eliminate hard cases essential for certain tasks, affecting the model's comprehensiveness and utility.**
We apologize for any lack of clarity in our descriptions. Below, we provide clearer explanations for hard and easy cases to address reviewer's concern, and we will also include these in the revised main text for better understanding.
- **Hard cases** refer to samples that the model struggles with during optimization. These can also be understood as samples near the decision boundary of downstream tasks. As analyzed in lines 156-160 of our manuscript, these samples satisfy $\mathcal{L}(x,\theta_t)-\mathcal{L}(x,\xi_t)>0$, indicating that this epoch's optimization gave negative feedback compared to historical optimization.
- Conversely, **easy cases** are samples that can be optimized very smoothly, leading to a continuous loss reduction, satisfying $\mathcal{L}(x,\theta_t)-\mathcal{L}(x,\xi_t)<0$.
Moreover, we want to emphasize that neither hard nor easy cases are pre-defined; they are identified in real time based on the loss discrepancy reflecting training dynamics, making them definitely task-specific, since the loss value is closely related to the specific task. Regarding the reviewer's concern about the risk of eliminating hard cases, we believe this is similar to the concern in `Weakness 1`. As explained in our previous response, hard samples are in fact the crucial OOD samples. Therefore, **our method does not eliminate these cases but rather preserves them as informative ones.**
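The loss-discrepancy criterion described above can be sketched in a few lines. This is our minimal illustration, not the paper's actual implementation: all function and variable names are hypothetical, and the numeric losses are made up for demonstration.

```python
# Hypothetical sketch of the loss-discrepancy criterion; names are illustrative.
def classify_sample(loss_online: float, loss_reference: float) -> str:
    """Label a sample by the sign of L(x, theta_t) - L(x, xi_t)."""
    discrepancy = loss_online - loss_reference
    if discrepancy > 0:
        return "hard"   # negative feedback vs. historical optimization
    return "easy"       # loss still decreasing smoothly

def informativeness(loss_online: float, loss_reference: float) -> float:
    """Both large positive and large negative discrepancies are informative;
    a dynamic pruner could retain the top-|score| fraction each iteration."""
    return abs(loss_online - loss_reference)

print(classify_sample(0.9, 0.4))  # "hard": loss rose relative to the reference
print(classify_sample(0.2, 0.5))  # "easy": loss keeps decreasing
```

Because the scores come from per-sample losses computed during training anyway, this identification adds no extra scoring passes over the dataset.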
> **3. The approach might exacerbate the 'molecular cliff' problem, where slight changes in molecular structure lead to significant changes in activity, and such nuances could be lost with aggressive data pruning in pre-training.**
Thank you for your constructive feedback. We believe there might be some misunderstanding regarding our data pruning setup. As elaborated in the `General Response`, our data pruning is conducted on the downstream dataset, not during the pretraining stage.
Furthermore, we would like to address the reviewer's concerns from an experimental perspective. If our method exacerbated the molecular cliff phenomenon in the pruned training set, the test performance in a random split setup would be poor. This is because retaining only these special cases during training would lead to a significant distribution shift between the training and test sets. However, **the experimental results in `Section 5` demonstrate SOTA data pruning performance**, and these **experiments were all conducted under the random split as mentioned in line 232 of our manuscript**. Therefore, we believe that the issue the reviewer is concerned about does not arise with our method.
> **4. The experimental section is limited to only three commonly used molecular datasets. Other equally important datasets, such as MUV, were not included in the comparisons.**
Thanks for your valuable comments to further enrich our empirical analysis. We have supplemented our work with additional experiments on the MUV dataset, following the same experimental setup described in the manuscript. Please refer to `Table 1` in the PDF file in the `General Response` for the results. We observe that MolPeg still achieves state-of-the-art performance on the MUV dataset, further validating the effectiveness of our method. In the revised version, we will include these additional experimental results in the appendix to enrich the experimental validation.
> **5. It is suggested that the discussion of limitations be moved to the main text.**
Thank you for pointing out this problem. In the revised version, we will follow the reviewer's suggestion and move the limitation to the main text.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer zkte,
We wanted to gently remind you that the deadline for the discussion phase is approaching, and we would greatly appreciate it if you could take a moment to review our responses.
Your feedback is very valuable to us, and we are eager to hear your thoughts. If there are any additional concerns or points of clarification, we are more than happy to address them.
Thank you for your time and consideration.
Best wishes,
Authors
---
Rebuttal 2:
Comment: Hi Reviewer zkte,
Does the author’s response address your concerns? Please acknowledge that you have read the responses at your earliest convenience.
Best wishes,
AC | null | null | Rebuttal 1:
Rebuttal: ### **General Response**
We would like to thank all reviewers very much for their extensive reviews and constructive critiques. We are encouraged that reviewers find that our approach is efficient and lightweight (Reviewer zkte), that the experiments are comprehensive and verify the effectiveness (Reviewer uU9Z and mAK4), that the paper is well-organized with a clear presentation (Reviewers mAK4 and uU9Z).
However, we notice that the research motivation and data pruning setup have not been fully captured, leading to potential misunderstandings of some reviewers. Therefore, we would like to **restate our motivation and pruning setup** to address related concerns:
- **Motivation**: We point out the current need for efficient training in molecular modeling and attempt to improve training efficiency from the perspective of data pruning. Our exploration in the `Introduction` shows that **traditional DP methods fail** in the molecular domain due to significant distribution shifts, damaging model generalization. Moreover, such distribution shift is inevitable due to the continual influx of novel molecular structures and functionalities in downstream tasks. In response to this phenomenon, we propose **the first source-free DP setup tailored for the molecular domain**, which targets the main practical challenge in the field and represents the core motivation of our research.
- **Pruning Setup**: We want to emphasize that our approach involves a **dynamic data pruning on downstream datasets**, rather than static pruning on pretraining dataset. Unlike static pruning, which selects a fixed subset before training, our method dynamically selects the samples in each iteration. This means our approach **adapts in real-time based on training dynamics**, avoiding selection biases inherent in static DP methods and better catering to the specific task requirements.
### **Contents of PDF File**
Moreover, we have provided an extra **PDF file containing results and figures supporting our rebuttal arguments**. It should be noted that due to the time and page constraints during the rebuttal period, we were unable to supplement comparative experiments for all pruning ratios and all baselines. However, we selected 3 challenging aggressive pruning ratios and 8 competitive DP baselines for comparison. We believe these additional results sufficiently validate the effectiveness of our method. Below, we provide **a brief summary** of the content in the PDF for the reviewers' reference:
- **Figure 1** : We present the usage frequency statistics of samples when MolPeg is applied on the HIV dataset. The x-axis represents the number of times samples are used throughout the entire training process, and the y-axis represents the corresponding sample amounts. (for Reviewer `zkte` Weakness 1)
- **Table 1** : We have supplemented the pruning effectiveness of our method on the MUV dataset. (for Reviewer `zkte` Weakness 4)
- **Table 2** : We have supplemented the robustness of MolPeg's performance when using lower-quality pretrained models. These lower-quality pretrained models were obtained from the ZINC and QM9 datasets, which have more limited diversity and a smaller scale compared to the PCQM4Mv2 dataset used in the main text. (for Reviewer `uU9Z` Weakness 1)
- **Table 3** : We have supplemented MolPeg's performance on the classic image classification datasets CIFAR-10 and CIFAR-100. (for Reviewer `uU9Z` Weakness 3)
Finally, we appreciate all your helpful comments that strengthen the quality and clarity of our work. We hope the following responses address your concerns, and we look forward to engaging in an active and productive discussion with the reviewers. If you have any other questions, please feel free to let us know.
Pdf: /pdf/620234f80729a5bbd3f72b0a5fddd4af009e0132.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Differentiable Quantum Computing for Large-scale Linear Control | Accept (poster) | Summary: This paper introduces an end-to-end quantum algorithm for the linear-quadratic control problem. The proposed quantum-assisted differentiable simulator is suitable for large-scale dynamical systems where the dimension of the system state is huge. Sample complexity is also provided for applying quantum computation to this problem. Simulation results support the theory.
Strengths: The quantum application to the linear-quadratic problem appears new and provides substantial computational benefit.
Weaknesses: This work only considers the linear-quadratic control problem.
Technical Quality: 3
Clarity: 3
Questions for Authors: What do optimality plots, such as those shown in Figures 2(a) and 2(b), look like for the setting of Figure 2(c) when you increase the system dimension?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the novelty and computational advantage of the proposed quantum application.
Due to the page limit, we only consider the classic LQR problem in this paper. Nevertheless, our approach can be readily generalized to other optimal control problems, such as distributed LQR and nonlinear control problems. We plan to investigate these applications in future work.
We conducted further numerical experiments to understand how the optimality scales with problem size, see Figure 2 in the uploaded PDF file. Here, we scale the number of masses $g$ from 2 to 4, and the problem dimension scales accordingly from 2 to 8. We measure the optimality by the relative error found in both our method and the classical method. The relative $f(K)$ error is $|f(K) - f(K^*)| / f(K^*)$ and relative $J$ error is $|J - J^*| / J^*$.
The results are reported in the following table. As the dimension scales up, the relative errors increase, while our method consistently outperforms the classical optimization method. The plot has been added to the uploaded PDF file (see Figure 2).
| problem dimension | $2g = 2$ | $2g = 6$ | $2g = 8$ |
|-------------------|----------|----------|----------|
| relative $f(K)$ error (classical) | 0.01177443 | 0.01369893 | 0.01562343 |
| relative $f(K)$ error (quantum) | 3.08486486e-08 | 3.54893993e-08 | 4.01301499e-08 |
| relative $J$ error (classical) | 0.01636588 | 0.01905698 | 0.02174809 |
| relative $J$ error (quantum) | 1.13132601e-07 | 1.28453806e-07 | 1.43775011e-07 |
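The relative-error metric used in the table is straightforward; for clarity, a trivial helper (our illustration, with a made-up example value) could look like this:

```python
def relative_error(value: float, optimum: float) -> float:
    """Relative error |v - v*| / |v*|, used for both f(K) and J."""
    return abs(value - optimum) / abs(optimum)

# Hypothetical illustration: a found objective of 1.02 vs. an optimum of 1.00
print(relative_error(1.02, 1.00))  # ~0.02, i.e. 2% above optimal
```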
We sincerely thank the reviewer for the detailed review. We plan to address the reviewer’s questions in the camera-ready version by elaborating on the possible generalization of the proposed methodology and including the new numerical experiments. If the reviewer finds this additional information helpful, we kindly request consideration for an increased preliminary rating for this submission. | Summary: The paper "Differentiable Quantum Computing for Large-scale Linear Control" introduces a quantum algorithm for linear-quadratic control problems, offering provable speedups. It utilizes a policy gradient method enhanced with a novel quantum subroutine for solving the matrix Lyapunov equation, leading to more accurate and robust gradient estimation than classical methods. The proposed algorithm achieves a super-quadratic speedup, making it the first end-to-end quantum application to linear control problems with demonstrable quantum advantages.
Strengths: Quantum Speedup: The algorithm achieves a super-quadratic speedup over classical methods, which is a significant advancement in the field of quantum computing for control problems.
Innovative Approach: It introduces a novel quantum-assisted differentiable simulator, enhancing the accuracy and robustness of gradient estimation.
Weaknesses: The experiments are insufficient. The proposed method has only been applied to simple abstract problems, but it would be beneficial to conduct experiments on practical problems in the form of LQR.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How does the proposed quantum algorithm handle cases where the sparsity assumptions on matrices A,B,Q,R do not hold?
2. Can the authors provide detailed runtime performance comparisons between the proposed quantum algorithm and state-of-the-art classical algorithms?
3. Comparison with Other Methods: How does the proposed method's stability and convergence rate compare with other existing quantum and classical approaches for solving linear-quadratic control problems?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The primary limitation of the proposed method is its dependency on the availability of quantum resources and the sparsity assumptions for the matrices involved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer recognizing the super-quadratic speedup achieved in this paper as a significant advancement in quantum computing for control problems.
Regarding the reviewer’s comment on insufficient experiments as a main weakness of this paper: while the key contribution of this work lies in the analytical formulation and theoretical advances, we did conduct several experiments in the original submission.
We perform another experiment on a practical problem that can be formulated as LQR, see Figure 1 in the uploaded PDF. Here, we consider the aircraft flight control problem, specifically for pitch angle control. We adopt a linearized model of the aircraft around a steady flight condition. For a small aircraft, the pitch dynamics can be represented by the following state variables: pitch angle $\theta$ (rad) and pitch rate $q$ (rad/s). The control input is elevator deflection angle $\delta$ (rad). The state-space model can be represented as
$\dot{x} =Ax+Bu$, where $x = [\theta, q]^T$, $u = [\delta]$.
We set $A=[[0, 1], [0, -0.5]]$, $B=[0, 1]^T$, $Q=[[10, 0], [0, 1]]$, and $R=[0.1]$.
The plot of our optimization curve is available in the uploaded PDF (see Figure 1). Clearly, our method *converges faster than the classical method*.
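For reference, the optimal LQR gain of this pitch-control model can be cross-checked classically via the continuous algebraic Riccati equation. The sketch below is our own verification using SciPy, not the quantum method of the paper; we read $A$ row-wise so that $\dot\theta = q$, matching the pitch dynamics stated above.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized pitch dynamics from the rebuttal: x = [theta, q], u = [delta]
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.array([[10.0, 0.0], [0.0, 1.0]])
R = np.array([[0.1]])

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P, then K* = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The LQR closed loop A - BK must be Hurwitz (all eigenvalues in the left half-plane)
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.all(closed_loop_eigs.real < 0))  # True
```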
Regarding the question on the sparsity assumption, *the sparsity assumptions are standard* across quantum algorithms research and are *necessary to efficiently load classical data into quantum registers*. Since the primary objective of this work is to establish the theoretical advantage of the proposed quantum algorithm, the sparsity assumptions are reasonable and on par with other works in this field. In practice, even when the sparsity assumptions do not hold, it is still possible to extend our algorithmic design to other types of quantum input models that allow dense data (e.g., [1, 2]). We will add this information to justify our sparse matrix input model and leave the adaptation of other input models to future work.
Since the proposed quantum algorithm utilizes advanced subroutines such as linear combination of unitaries (LCU), a detailed runtime analysis would require developing a customized compiler to estimate the gate count, which is clearly beyond the scope of the current paper. The **asymptotic analysis in the paper asserts a super-quadratic speedup against SOTA classical** algorithms, which paves the way toward a comprehensive resource analysis in future research.
Regarding the comparison with other methods: to the best of our knowledge, this paper is the **first to prove robust convergence to the LQR solution in the quantum computation literature**. We achieve a linear convergence rate, which is comparable with the classical SOTA, while our quantum advantage comes from the fact that we can **solve the matrix Lyapunov equation exponentially faster than any classical means**. Since our algorithm uses analytical gradient estimation, it **demonstrates stability in the SGD iterations** and achieves **faster convergence compared to classical model-free methods**, as illustrated in our numerical experiments (see `Figure 2` in the main paper).
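As a classical point of comparison for the quantum subroutine, the continuous matrix Lyapunov equation $A^\top X + X A + Q = 0$ is typically solved by the Bartels–Stewart algorithm in $O(n^3)$ time. The sketch below is our illustration of that classical baseline with a randomly generated stable matrix (the matrices are made up for demonstration), not the paper's quantum routine:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # a (safely) Hurwitz matrix
Q = np.eye(n)

# Classical O(n^3) baseline: solve A' X + X A = -Q via Bartels-Stewart
X = solve_continuous_lyapunov(A.T, -Q)

# Check the residual of A' X + X A + Q = 0
residual = A.T @ X + X @ A + Q
print(np.max(np.abs(residual)) < 1e-8)  # True
```

This cubic classical cost is exactly what the quantum subroutine is designed to beat in the large-scale regime.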
We sincerely thank the reviewer for the detailed review. We plan to address the reviewer’s questions in the camera-ready version by elaborating on the technical assumptions and adding a numerical experiment with a practical background. If the reviewer finds this additional information helpful, we kindly request consideration for an increased preliminary rating for this submission.
[1] Wang and Wossnig (2018). A quantum algorithm for simulating non-sparse Hamiltonians. [arXiv:1803.08273](https://arxiv.org/abs/1803.08273)
[2] Liu and Lin (2023). Dense outputs from quantum simulations. [arXiv:2307.14441](https://arxiv.org/abs/2307.14441)
---
Rebuttal Comment 1.1:
Title: Thank you for the responses.
Comment: The author's response has somewhat addressed my concerns and questions. I will increase my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks for your kind response.
Comment: Thank you for your thoughtful feedback and for taking the time to review our work. We sincerely appreciate your willingness to reconsider the score!
Strengths: 1. To the best of my knowledge, this is the first end-to-end quantum application to linear control problems with provable quantum advantage.
2. This paper also provides numerical evidence to demonstrate the robustness and favorable convergence behavior of the method.
3. This paper is clearly written.
Weaknesses: 1. While the paper highlights the theoretical advantages of the quantum algorithm, implementing these algorithms on current quantum hardware might pose significant challenges. Today's quantum computers suffer from noise and have a limited number of qubits, which could affect the actual performance and reliability of the algorithm.
2. Although the paper claims a super-quadratic speedup, it's important to verify whether this speedup is practically achievable. Especially in large-scale industrial models, the real-world complexity and scalability of the algorithm are critical issues.
3. This paper is not always well written. For example, in Line 520, "The evolution of a quantum state can always described by a unitary operator".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the proposed quantum algorithm account for the current limitations of quantum hardware, such as noise and the limited number of qubits?
2. The paper introduces a novel quantum subroutine for solving the matrix Lyapunov equation. Could you elaborate on the specific technical innovations that this subroutine brings compared to existing quantum algorithms? What are the key theoretical breakthroughs that enable the claimed super-quadratic speedup?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of this work have been adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer recognizing our work as the first end-to-end quantum application to linear control problems with a provable quantum advantage.
One main concern in the review is related to the performance and reliability of the proposed quantum algorithm on noisy quantum hardware. The primary objective of this work is to establish the theoretical advantage of our algorithm, and the core subroutine (i.e., solving the matrix Lyapunov equation) requires a fault-tolerant quantum computer. That being said, our method still has reasonable potential for implementation on early fault-tolerant devices. In particular, the following aspects of our algorithm make it particularly robust given **mild noise** and a **limited number of logical qubits**:
1. The **hybrid quantum-classical nature** of our algorithm *mitigates noise* by limiting the runtime of each quantum processing routine via interspersed classical routines, which is advantageous on quantum devices with fewer qubits and shorter qubit coherence times.
2. The classical optimizer in our algorithm utilizes a **stochastic gradient descent (SGD)** algorithm, which *in principle converges for any unbiased gradient estimator*. The unbiasedness is a reasonable assumption given the independent nature of quantum noise. Therefore, it is expected that our algorithm is resistant to (sufficiently low amounts of) independent noise in gate execution and/or memory.
3. Our algorithm is truly **end-to-end** in the sense that the input and output are both classical data, which is often not the case for other proposed quantum algorithms for optimal control and RL. For example, most cited works in the “Quantum reinforcement learning” part under Section 2 do not support a comparable classical output.
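The second point above can be illustrated with a purely classical toy (our sketch, unrelated to the quantum setting and not part of the paper): SGD on a quadratic objective still converges when every gradient estimate carries zero-mean, i.e. unbiased, noise, provided the step size decays.

```python
import random

# Toy illustration: SGD on f(x) = x^2 / 2 with gradient estimates
# corrupted by zero-mean (hence unbiased) noise. A decaying step
# size averages the noise out, and the iterate still converges.
rng = random.Random(42)
x = 5.0
for t in range(5000):
    noisy_grad = x + rng.gauss(0.0, 1.0)  # E[noise] = 0: unbiased estimator
    x -= noisy_grad / (t + 10)            # step size on the order of 1/t
final_gap = abs(x)  # distance to the minimizer x* = 0
```

The same argument does not go through for biased noise, which is why the independence (and hence unbiasedness) assumption on the hardware noise matters.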
The second question concerns the key theoretical innovation in the novel quantum subroutine for solving the matrix Lyapunov equation. This quantum subroutine is **exponentially faster** than its classical counterpart and is the key ingredient in achieving *super-quadratic quantum speedup*. The high-level idea of the quantum subroutine is to use the linear combination of unitaries (LCU) technique to compute the integral formula (12). To do so, we need an efficient implementation of the block-encoded operator $\exp(\mathcal{A}t)$, where $\mathcal{A}$ is Hurwitz. Note that no explicit upper bound on the largest singular value of $\mathcal{A}$ is known. Most existing quantum algorithms for this task (i.e., to compute $\exp(\mathcal{A}t)$) cannot be applied in our case, either because they have stronger assumptions on the matrix $\mathcal{A}$ (Quantum Singular Value Transformation (QSVT) [1] and Linear Combination of Hamiltonian Simulations (LCHS) [2]) or because they have worse asymptotic scaling (Taylor series expansion leads to an exponentially small success rate, and the so-called time-marching strategy [3] has super-linear scaling in $t$ (more precisely, $t^2$)). Instead, we employ some ideas from Quantum EigenValue Transformation (QEVT) [4], a recent breakthrough in quantum algorithms, to construct this block-encoded operator with favorable asymptotic scaling (linear in $t$ and polylogarithmic in $1/\epsilon$). It is worth noting that our construction is not identical to the one in [4], as they do not have an explicit block-encoding form in the original paper.
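To make the high-level idea concrete, here is a schematic paraphrase (ours, not the paper's exact construction) of how LCU applies to the integral formula (12): truncate the integral at some horizon, discretize it with quadrature nodes $t_k$ and weights $w_k$, and build each term from a block-encoding of the matrix exponential,

```latex
% Schematic paraphrase (ours) of the LCU step, assuming \mathcal{A} is Hurwitz:
X^* \;=\; \int_0^{\infty} e^{\mathcal{A}t}\,\Omega\,e^{\mathcal{A}^\top t}\,\mathrm{d}t
\;\approx\; \sum_{k=0}^{K} w_k\, e^{\mathcal{A}t_k}\,\Omega\,e^{\mathcal{A}^\top t_k}.
```

Under this view, the cost of the subroutine is dominated by how cheaply each block-encoded $e^{\mathcal{A}t_k}$ can be constructed, which is where the QEVT-based construction with its linear-in-$t$ scaling enters.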
We sincerely thank the reviewer for the detailed review. We plan to address the reviewer’s questions in the camera-ready version of this paper by further elaborating on our technical novelty and near-term feasibility. If the reviewer finds this additional information helpful, we kindly request consideration for an increased preliminary rating for this submission.
[1] Gilyén et al. (2018). Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetic. [arXiv:1806.01838](https://arxiv.org/abs/1806.01838)
[2] An et al. (2023). Linear combination of Hamiltonian simulation for nonunitary dynamics with optimal state preparation cost. [arXiv:2303.01029](https://arxiv.org/abs/2303.01029)
[3] Fang et al. (2022). Time-marching based quantum solvers for time-dependent linear differential equations. [arXiv:2208.06941](https://arxiv.org/abs/2208.06941)
[4] Low and Su. (2024). Quantum eigenvalue processing. [arXiv:2401.06240](https://arxiv.org/abs/2401.06240)
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal and for addressing the concerns raised. I appreciate the insights provided, particularly regarding the theoretical contributions and potential robustness of your proposed quantum algorithm.
Your work establishes a good theoretical foundation, and the practical realization and scalability of these methods in real-world scenarios are important aspects to consider moving forward. I will maintain my current assessment of the submission.
---
Reply to Comment 1.1.1:
Title: Thanks for your comment!
Comment: Thank you for your thoughtful review and for recognizing the theoretical contributions of our work. We appreciate your insights on the practical realization and scalability of our proposed methods. Your feedback has been important in refining our approach. | Summary: This paper studies the problem of applying quantum computing to linear quadratic regulator (LQR) control. The approach is based on an efficient quantum estimation of the policy gradient. When the dimension n of the state space is large, the proposed approach can achieve orders of magnitude improvement on the time complexity to find the optimal controller compared to existing policy gradient methods.
Strengths: I am not familiar with quantum computing. Since the LQR sample complexity has received much attention recently, this work should be interesting for the learning theory community if the claim about time complexity improvement is correct.
Weaknesses: Since this work proposes a model-based approach and assumes access to the exact model, including A, B, Q, and R (Algorithm 1), I wonder why the authors compare with the model-free approach in [44], which does not assume knowledge of the model and uses two-point gradient estimation. I don’t think this is a fair comparison.
Besides, I do not understand why the application of the proposed quantum approach is limited to the classic LQR problem. If the real contribution is the first quantum approach to solve the Lyapunov equation (11) efficiently, I think the proposed method can be applied to other important problems like stability analysis. I hope the authors can clarify the most general statement of the main contribution or what limits the application to LQR.
Technical Quality: 2
Clarity: 3
Questions for Authors: Besides my comments in the weakness part, I also hope the authors can clarify if the proposed method is robust to estimation errors in A and B if we estimate them from samples (see [Dean, Sarah, et al., 2020]).
[Dean, Sarah, et al., 2020] Dean, Sarah, et al. "On the sample complexity of the linear quadratic regulator." Foundations of Computational Mathematics 20.4 (2020): 633-679.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: I do not see any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for confirming that our work is of interest to the learning theory community, given that the technical part is sound. In what follows, we address the concerns regarding the weaknesses and technical details mentioned in the review.
In Table 1, the “(Model-based) policy gradient [44]” item summarizes the convergence results proven in [Theorem 1, 44]. This result is proven under the assumption of a “known model,” indicating that the policy gradient can be computed explicitly. It focuses on the convergence rate of an exact gradient descent method and is parallel to the “model-free” approach discussed later in the same paper. Given the context in [44], since our algorithm adopts a similar policy gradient idea with quantum-assisted gradient estimation, we believe that it is fair to mention the (model-based) policy gradient result in Table 1. We apologize for the confusion and will elaborate further in the camera-ready version.
We agree with the reviewer that our quantum subroutine for the matrix Lyapunov equation is of *independent interest* and its **application need NOT be limited to the LQR problem**. Our quantum algorithm is based on the integral formula (12),
$$X^* = \int^\infty_0 e^{\mathcal{A}t} \Omega e^{\mathcal{A}^T t} \, \mathrm{d}t,$$
which holds only when the matrix $\mathcal{A}$ is Hurwitz stable. If our quantum algorithm does not produce a correct solution to the Lyapunov equation in a designated time, we may conclude that the matrix $\mathcal{A}$ is not Hurwitz stable. However, the output of our quantum algorithm is a “block-encoded matrix,” a quantum circuit that cannot be efficiently simulated by classical means. This means that checking if the output “solution” satisfies the Lyapunov equation is quite non-trivial and requires additional algorithmic design, as opposed to the classical case where we can do simple matrix multiplication. For the sake of conciseness and self-consistency of this paper, we mainly focus on the end-to-end solution of the LQR problem. We will clarify our main technical contribution by discussing the potential applications of our quantum subroutine in the camera-ready version and leave the technical details for future work.
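As a purely classical sanity check of the integral formula (our sketch, using a hypothetical diagonal Hurwitz matrix so that the matrix exponential has a closed form), one can verify numerically that the truncated integral matches the closed-form Lyapunov solution:

```python
import math

# Sanity check of the integral formula for the Lyapunov equation,
# using a hypothetical diagonal Hurwitz A = diag(l_1, l_2) so that
# e^{At} = diag(e^{l_1 t}, e^{l_2 t}) in closed form. Then
#   X*_ij = integral_0^inf e^{l_i t} Omega_ij e^{l_j t} dt
#         = -Omega_ij / (l_i + l_j),
# which solves A X + X A^T + Omega = 0.
lams = [-1.0, -2.0]
Omega = [[1.0, 0.5], [0.5, 2.0]]

def truncated_integral(i, j, T=40.0, steps=100000):
    # Trapezoidal quadrature of t -> e^{l_i t} Omega_ij e^{l_j t} on [0, T].
    h = T / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.exp((lams[i] + lams[j]) * t) * Omega[i][j]
    return total * h

X_quad = [[truncated_integral(i, j) for j in range(2)] for i in range(2)]
X_closed = [[-Omega[i][j] / (lams[i] + lams[j]) for j in range(2)]
            for i in range(2)]
```

The truncation at a finite horizon is justified precisely by the Hurwitz property, since the integrand decays exponentially.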
Regarding the estimation error on $A$ and $B$: if the matrix $A$ and $B$ are estimated using the independent data collection scheme (as indicated in [Dean, Sarah, et al., 2020]) and these estimates allow efficient quantum input procedures (as described in Assumption 1 in our manuscript), we believe that our method is robust to the estimation error. This is because the relative error in the LQR objective function can be characterized similarly to how it is presented in Proposition 1.2 in the reference paper [Dean, Sarah, et al., 2020]. We thank the reviewer for making this insightful and interesting observation. We will cite this paper in the camera-ready version.
We sincerely thank the reviewer for the detailed review. We plan to address the reviewer’s questions in the camera-ready version by clarifying the comparison in Table 1 and elaborating on the scope of potential applications. If the reviewer finds this additional information helpful, we would be grateful if they could consider an increased preliminary rating for this submission. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their invaluable comments on our submission. We are particularly grateful for the reviewers' recognition of the novelty of our paper as the *first end-to-end quantum application to optimal control* (Reviewers v7Mi, 7tHw), the acknowledgment of the *super-quadratic speedup* over classical methods as a significant advancement (Reviewer dSDS), and the overall *interest our work generates within the broader learning theory community* (Reviewer oizh).
In this message, we would like to address a few questions raised by the reviewers regarding the technical contributions, algorithm applicability, and numerical experiments.
The proposed algorithm is truly **end-to-end**, with classical input and classical output, and achieves a **super-quadratic speedup** against SOTA classical methods for the LQR problem. Both aspects make it a significant result in quantum computing. Most existing quantum algorithms for classical optimization and control problems either return a quantum state as output, requiring an exponential overhead to convert to classically readable data, or only achieve a quadratic speedup by leveraging a Grover-type quantum algorithm. In contrast, our super-quadratic speedup comes from an exponentially faster quantum subroutine for solving the matrix Lyapunov equation. To the best of our knowledge, this matrix Lyapunov equation solver is a novel quantum algorithm and is of independent interest. As Reviewer oizh mentioned, this subroutine can be applied to other problems, such as stability analysis. The key technical difficulty behind this new quantum subroutine is that it requires efficiently computing the matrix exponential of a Hurwitz matrix, for which most existing quantum algorithms are not directly applicable. We employ ideas from a recent result known as Quantum EigenValue Transformation (QEVT) to resolve this difficulty.
Since the primary goal of this paper is to investigate the theoretical advantage of the proposed quantum algorithm, we adopt the sparse-matrix input model, which is considered one of **the most standard input models for quantum algorithm design**. The hybrid quantum-classical nature of our algorithm limits the runtime of each quantum processing routine by interspersing classical routines, which is advantageous on near- and mid-term quantum devices with fewer qubits and shorter qubit coherence times. While estimating the actual gate count is beyond the scope of this paper, the hybrid quantum-classical approach makes our algorithm more feasible than many other algorithms without a classical component.
Additionally, we add two more numerical experiments to show the scalability and practical relevance of our approach. We observe that (1) our method **converges faster** than classical methods in an **aircraft flight control** problem, and (2) our method **consistently outperforms classical methods** as the **problem dimension increases**. The figures are included in the uploaded PDF file. We will add these results in the camera-ready version of this submission.
We appreciate the reviewers' informative feedback and look forward to further discussions on specific technical or conceptual questions. If this additional information proves helpful, we would be thankful if the reviewers could consider a higher preliminary rating for our submission.
Pdf: /pdf/7791117e15a0933f0b08b0373a1125fbc2125abd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Consistency of Neural Causal Partial Identification | Accept (poster) | Summary: This paper theoretically studies the partial identification capabilities of neural causal models (NCMs), that is, the extent to which this type of model can approximate the interval in which a certain causal query falls, $\theta(\mathcal{M}) \in [\underline{F}, \overline{F}]$. To this end, the authors develop a number of results showing that under ideal scenarios with oracle knowledge of the exogenous distributions, NCMs can approximate any SCM if the proper architecture and regularization are used. Interestingly, the authors show that regularization is key in these settings, as otherwise the observational likelihood can be modelled arbitrarily well while the interventional distributions do not look alike. It is worth noting that the theory developed considers general scenarios with confounders and mixed-type data.
Strengths: - **S1.** This work addresses an intellectually interesting question of whether NCMs can properly perform causal partial identification.
- **S2.** The theory developed looks general, sound, and it makes sense on an intuitive level (disclaimer: I have _not_ looked into the proofs).
- **S3.** The work properly studies mixed-type data, which I have rarely seen done.
- **S4.** The fact that NCMs _need_ a form of regularization to properly approximate the SCMs is really interesting.
Weaknesses: - **W1.** My personal main concern is that I do not immediately see any practical implication of the theory developed here. What I mean is that, while I find it interesting, if I have understood it correctly it works under the assumption that we know the exogenous distribution of the ground-truth model, which we never know, and which could be arbitrarily different from whatever we model (we can always change the exogenous distribution and push the transformation to the generator $f$).
- **W2.** While trying to be general, I similarly feel that some of the assumptions can be quite restrictive in practice. E.g., global Lipschitz-continuity is quite a constraining assumption from my point of view.
- **W3.** I know it is a dense theory paper, but the presentation leaves a lot to be desired. There is a lot of similar notation, non-standard definitions (e.g. SCMs), and a lack of intuition/examples that makes the results hard to parse. The paper is simply not accessible to a large percentage of the community.
- **W4.** The experimental section is extremely short, and it would require of more settings, an ablation study of the different components of the theory, and a proper analysis of the results.
Technical Quality: 3
Clarity: 2
Questions for Authors: I have no specific questions, but I want to make clear that a dense paper like this would require many more hours for me to fully understand it and I may have missed some important points. I will reflect this in my confidence score.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are **not** discussed. In the checklist, they are said to be discussed in the Conclusion section, which does not exist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! Below is our response. We hope our clarifications address your concerns.
***1. The method only works under the assumption that we know the exogenous distribution of the ground-truth model.***
We thank the reviewer for their advice. However, we would like to kindly clarify a misunderstanding underlying a key criticism, and we will make sure this point is described even more clearly in the paper: our algorithm does not need to know the ground-truth exogenous distribution of the latent factors, and this is a key development of our work. We only need to assume that these unknown exogenous distributions satisfy some mild regularity assumptions (Assumptions 4 and 5). This is a key point of partial identification. Without knowing the exact latent distribution, we search over all NCMs that induce a distribution similar to the one observed empirically (measured by a metric distance) and take the maximum and minimum of the causal quantity we are interested in subject to this metric distance constraint.
In our algorithm, we push forward uniform and Gumbel variables using neural networks to simulate exogenous distributions that satisfy Assumption 4 (and possibly Assumption 5, depending on the architecture used) and take the maximum and minimum of the causal quantity over causal models whose latent distributions fall in this class (Problem (7) in the paper). A key theoretical insight of our work is that by passing these simple distributions through appropriate neural network architectures, one can represent arbitrary latent variable distributions and hence “fit” any ground truth latent distribution that satisfies the regularity assumptions. Thus arbitrary SCMs, with general unknown latent factor distributions, can be represented through NCMs, and when fitting to data, one does not need to know the actual distribution of the latent factors. For this reason, our theoretical results have a strong implication for practice and essentially analyze the methodology that prior work had extensively studied empirically.
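A discrete toy analogue of this min/max search may help (our illustration, far simpler than NCMs, with hypothetical numbers): for a binary treatment $T$ and outcome $Y$, the observed joint $P(T,Y)$ pins down part of every compatible causal model, while the counterfactual probabilities $q_0 = P(Y(0){=}1 \mid T{=}1)$ and $q_1 = P(Y(1){=}1 \mid T{=}0)$ remain free; sweeping the free parameters and recording the extremes of the ATE yields the partial identification interval.

```python
# Toy partial identification: every causal model consistent with the
# observed joint P(T, Y) (hypothetical numbers below) is parameterized
# by the free counterfactual probabilities q0 = P(Y(0)=1 | T=1) and
# q1 = P(Y(1)=1 | T=0); sweeping them yields bounds on the ATE.
obs = {(1, 1): 0.30, (1, 0): 0.20, (0, 1): 0.10, (0, 0): 0.40}  # P(T=t, Y=y)
pT1 = obs[(1, 1)] + obs[(1, 0)]  # P(T = 1)

ates = []
grid = [k / 50 for k in range(51)]  # free parameters swept over [0, 1]
for q0 in grid:
    for q1 in grid:
        pY1 = obs[(1, 1)] + (1 - pT1) * q1  # P(Y(1) = 1)
        pY0 = obs[(0, 1)] + pT1 * q0        # P(Y(0) = 1)
        ates.append(pY1 - pY0)
lo, hi = min(ates), max(ates)  # partial identification interval for the ATE
```

The NCM approach replaces this exhaustive sweep with an optimization over neural causal models, and the consistency theory analyzed in the paper controls the error introduced by that substitution.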
***2. While trying to be general, some of the assumptions can be quite restrictive in practice. E.g., global Lipschitz-continuity is quite a constraining assumption.***
Thanks for your feedback. We work with compactly supported distributions (Assumption 3), so a local Lipschitz function is automatically global. While certainly not fully general, the Lipschitz continuity assumption is a standard assumption in the literature on the representation theory of neural network architectures, and most neural network architectures (e.g. ReLU) are in fact globally Lipschitz. Moreover, even if there is no confounding in the SCM, to learn the SCM, we need to learn the structural functions $f_i$. To avoid impossibility results, like the No-free-lunch theorem, one needs to make some structural assumptions on the function spaces that the functions $f_i$ lie in. Lipschitz function spaces are a natural choice when providing theoretical statistical consistency and representativeness theorems [3,4,5].
***3. There is a lot of similar notation, non-standard definitions (e.g. SCMs), and a lack of intuition/examples that makes the results hard to parse.***
Thank you for your feedback. Due to space limitations, we are unable to present all the details in the main body. However, we provide some examples in the appendix. For the definition of SCM, we follow the definition of Xia et al. [1]. The only difference with the traditional SCM literature is that we allow the latent “noise” variables to enter more than 2 observed nodes. This convention was also followed in Xia et al. [1]. The causal graph induced by an SCM in our definition is a kind of Acyclic Directed Mixed Graph obtained by latent projection [2]. We have provided one example in Appendix A to illustrate our definition.
***4. The experimental section is extremely short, and it would require more settings and a proper analysis of the results.***
Thank you for your comment. As we point out in the paper, the methodology that we analyze has been proposed and analyzed experimentally in prior work. The main contribution of our paper is to provide a theoretical justification of this methodology in general environments. Please see item 1 of our response to the reviewer zYjd for details.
***5. Limitations are not discussed and the conclusion is missing.***
Thank you for pointing this out! We did not include a conclusion section due to space limitations, which can be easily addressed. We will add a ‘Conclusion and Limitations’ section in the camera-ready version, as shown in item 4 of our response to reviewer 8521.
[1] Kevin Xia, Kai-Zhan Lee, Yoshua Bengio, and Elias Bareinboim. The causal-neural connection: Expressiveness, learnability, and inference, October 2022.
[2] Jin Tian and Judea Pearl. On the testable implications of causal models with hidden variables, December 2002.
[3] Yarotsky, Dmitry. "Optimal approximation of continuous functions by very deep ReLU networks." Conference on learning theory. PMLR, 2018.
[4] Yarotsky, Dmitry. "Error bounds for approximations with deep ReLU networks." Neural networks 94 (2017): 103-114.
[5] Wainwright, Martin J. High-dimensional statistics: A non-asymptotic viewpoint. Vol. 48. Cambridge University Press, 2019.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their kind and detailed response, it should be clear by now that I am an outsider on this particular problem, and the initial barrier is a bit too tough to pass in the reviewing period. Clearly, other reviewers were much more positive than I initially was.
With that said, I understand the practical implications of this work, and I applaud it. However, I still have some questions for which I would appreciate an answer (I apologize I don't have currently the time to go through the proofs in the appendix, yet I'll try to go through the main paper one more time).
To my understanding, one key aspect of this work is splitting the estimation error in two terms, the error in the exogenous distribution and that of the causal mechanisms. From there, one can use the theory of NN approximation to estimate the error committed in each part, depending on depth and width. To achieve this result, it is necessary to assume a specific $\mathcal{M}^*$ in its canonical form, which would have a specific Lipschitz constant $L^*$ that the proofs should somehow use.
What I don't understand is that, without assuming that one knows the exogenous distribution of $\mathcal{M}^*$, it is possible to find alternative canonical representations by applying arbitrary transformations $\phi$ to the canonical form, i.e., having $(\phi \circ P_U, \phi^{-1} \circ f)$ instead of $(P_U, f)$, with arbitrary Lipschitz constants for the target SCM. As far as I can see, the definition of the canonical form does allow these cases. How does your work rule them out? I guess some of the assumptions do it?
Thanks in advance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your question.
We want to clarify that the approximation theorems (Theorem 1 and Corollary 1) are tools we use to prove the consistency theorem (Theorem 4), which is the central result of our paper. These approximation theorems state that given an SCM (with known latent distribution and structural functions) that satisfies some assumptions, we can approximate it using NCMs. However, in the consistency theorem, *we do not assume that we know the latent distribution or the structural functions (nor does the consistency theorem claim that we recovered the true latent distribution; just that the true latent distribution is part of the “search space”)*. The consistency theorem establishes the consistency of the bounds for the target estimand.
As we mentioned before, the goal of partial identification is to determine bounds for a causal quantity across all SCMs that have the same observation distribution as the ground-truth model. Since it is difficult to search over all SCMs, we argue that it is sufficient—and more practical—to search over all NCMs. The approximation theorems are used to prove that the set of NCMs is ‘close’ to the set of SCMs so that the errors caused by this substitution in the partial identification problem can be controlled.
Therefore, the short answer to your question is that we cannot rule out alternative causal models that result from transformations, as long as they satisfy all the assumptions (particularly Assumptions 3 and 4). Since we only observe a subset of the variables, multiple causal models could potentially lead to the same observational distribution. Without additional information about the true underlying model, it is often difficult to uniquely identify a single causal model from observations alone.
This is a typical case in causal inference. Even when the target estimand is point identified (e.g. ATE with no unobserved confounding), and hence our bounds converge to a singleton, that does not mean that the latent distribution of the structural causal model nor the structural functions are identifiable, due to the ambiguity you mention. Still, in such unique identification cases, all such transformations yield the same target estimand.
We hope this clarification helps you better understand our work, and we are happy to address any further questions you might have. | Summary: This paper provides a novel perspective and solid contribution to the case where continuous and categorical variables both exist.
Strengths: The presentation and organization are concise and clear; the contribution is solid.
Weaknesses: The assumption is somewhat strong (e.g., Assumption 2).
Technical Quality: 3
Clarity: 3
Questions for Authors: How do we argue the importance of solving the problem of ``the approximation error and consistency of the optimization-based approach to partial identification via NCMs''?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and questions! Below is our response. We hope our clarifications address your concerns.
***1. The assumption is somewhat strong (e.g., Assumption 2).***
Thank you for your feedback. Assumption 2 is a relatively new assumption about the data generation process of categorical variables, but we again view it as a mild assumption that is basically an analog of the Lipschitz structural equation function assumption that we place on continuous variables. This assumption basically states that if one conditions on all the latent factors that impact a given variable and which also impact other variables (i.e. once one conditions on the latent confounders for this variable), then the probability that this categorical variable takes any given value is a Lipschitz function of these confounding latent factors. We make this intuition formal using the following convention. The classical definition of SCM assumes that the structural equations have the form $ V_i = f_i(\text{Pa}(V_i),U_i) $, where $\text{Pa}(V_i)$ is the set of parents of node $V_i$. However, if we follow this definition and $V_i$ is categorical, this means $f_i$ can only take finitely many values. If one of the inputs is a continuous variable, the only way for $f_i$ to be continuous (which is a quite mild assumption) is that $f_i$ is a constant function, which is trivial. To avoid this situation, we assume that the latent variables have two parts, the confounding part $U$ and the independent noise part $G$. The categorical variable $V_i$ is generated by the following process. First, we calculate the propensity function $f(\text{Pa}(V_i),U_i) \in \Delta =\\{x\in\mathbb{R}^d:x_i \geq 0, \sum x_i = 1\\}$. Then, we generate a categorical variable according to this probability using the independent noise $G_i \in G$. In this way, we only need to approximate the propensity in the learning process.
We also want to emphasize that the Gumbel variables in assumption 2 are chosen for convenience to generate categorical distributions in the second step mentioned above. One can replace the Gumbel variables with any random variables that can generate categorical distributions. Therefore, this assumption is basically a natural analog of the “Lipschitz structural equation” assumption, to the data generation process of categorical variables.
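To make the second generation step concrete, here is a minimal sketch (ours, not the paper's implementation) of the standard Gumbel-max trick: adding independent standard Gumbel noise to the log-propensities and taking the argmax yields a draw from the categorical distribution given by the propensity vector.

```python
import math
import random

# Gumbel-max trick: if G_i are i.i.d. standard Gumbel, then
# argmax_i (log p_i + G_i) is distributed as Categorical(p).
def gumbel_max_sample(propensity, rng):
    keys = []
    for p in propensity:
        g = -math.log(-math.log(rng.random()))  # standard Gumbel draw
        keys.append(math.log(p) + g)
    return max(range(len(keys)), key=keys.__getitem__)

rng = random.Random(0)
p = [0.2, 0.3, 0.5]
n = 20000
counts = [0] * len(p)
for _ in range(n):
    counts[gumbel_max_sample(p, rng)] += 1
freqs = [c / n for c in counts]  # empirical frequencies approximately p
```

This illustrates why the Gumbel variables are a convenience: any noise source capable of generating a categorical draw from a given propensity vector would serve the same role.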
***2. How do we argue the importance of solving the problem of “the approximation error and consistency of the optimization-based approach to partial identification via NCMs”?***
Thank you for your question. As we mentioned in our paper, identifying causal quantities from observational data is an important problem. However, in the presence of unobserved confounding, it is usually impossible to accurately identify many causal quantities. In that situation, partial identification gives a remedy to the problem.
There is a rich literature on partial identification in causal inference, and the optimization-based approach has demonstrated good empirical performance [1,2,3,5]. However, in the general continuous-variable setting (and the mixed setting), previous methods [1,2,3] do not have any theoretical result on the soundness of their methods. Moreover, the NCM approach is the only generic methodology for partial identification with general variables. Prior general methodologies for automated partial identification [5], which do not go through the NCM method, are only available for discrete variables and, as our experiments showcase, they translate poorly to the continuous-variable setting (if one invokes discretization) and do not have natural continuous counterparts that one could analyze theoretically. Thus our paper fills this gap: it provides a theoretical justification of the only existing candidate for an automated partial identification procedure for general variables.
[1] Kocaoglu, Murat, et al. "CausalGAN: Learning causal implicit generative models with adversarial training." arXiv preprint arXiv:1709.02023 (2017).
[2] Balazadeh, Vahid, Vasilis Syrgkanis, and Rahul G. Krishnan. "Partial identification of treatment effects with implicit generative models." October 2022.
[3] Padh, Kirtan, Jakob Zeitler, David Watson, Matt Kusner, Ricardo Silva, and Niki Kilbertus. "Stochastic causal programming for bounding treatment effects." In Conference on Causal Learning and Reasoning, pages 142–176. PMLR, 2023.
[4] Gunsilius, Florian. "A path-sampling method to partially identify causal effects in instrumental variable models." June 2020.
[5] Duarte, Guilherme, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. "An automated approach to causal inference in discrete settings." September 2021.
---
Rebuttal 2:
Title: Thanks
Comment: It is a concise and high-quality response. Thank you! I raise my score to 6.
Besides, I am in fact quite familiar with partial identification; it is one of my main research topics. I also share two nice new papers and recommend that you cite them, as follows.
Of course, I acknowledge that this literature is somewhat different from your setting, but both papers make meaningful contributions to general PI. For instance, after my careful proofreading, the main iff theorem of [1] holds in the discrete/continuous/mixed cases.
Finally, my last question is: why is the extension from the discrete case to the continuous case highly non-trivial? For instance, given a continuous variable, why not perform naive discretization, with the corresponding result perhaps not changing significantly? If that were true, the extension would not be interesting. So, I wonder about some more insightful reasons/counterexamples.
Thank you! I will continue to consider raising my score.
[1] Tight partial identification of causal effects with marginal distribution of unmeasured confounders. (Zhang, ICML 2024 spotlight)
[2] Model-Agnostic Covariate-Assisted Inference on Partially Identified Causal Effects (Wen et al.)
---
Rebuttal Comment 2.1:
Comment: Thank you so much for your feedback and for sharing the recent literature with us. We sincerely appreciate your contributions to improving our work.
Regarding the two papers you mentioned, they indeed focus on a specific type of causal graph, which differs from the general causal graph setting our algorithm addresses. However, we agree that these papers offer valuable insights for the partial identification literature. We will cite them in the camera-ready version of our paper.
As for your question about discretization, we appreciate the opportunity to clarify this point. As we mention in the paper, for a general discrete SCM, the Partial Identification (PI) problem becomes a Polynomial Programming (PP) problem [1], which is NP-hard in general. While discretizing continuous data and solving the PP problem to obtain bounds is possible, the problem's size grows *exponentially* with the cardinality of the support. This implies that the finer the discretization of the continuous data, the larger and more computationally expensive the resulting PP problem becomes.
In our experiments, we found that the PP problem quickly becomes intractable as we increase the support's cardinality of the continuous variables. For instance, in a continuous IV setting, we compared our algorithm with Autobounds [1], the state-of-the-art algorithm for solving discrete PI problems, by discretizing all continuous variables such that their support's cardinality is 8 (details are in Section D.2). This produces a PP problem of size approximately $2^{14}$. However, even though we solved such a large problem, the bound obtained by this approach was not as tight as the one obtained using the NCM approach. While finer discretization could potentially improve the bound, the resulting PP problem becomes so large that we were unable to obtain a solution after running the Autobounds algorithm for over a day. Therefore, we believe it is crucial to generalize the NCM approach to include continuous (and mixed) variables, as it proves more efficient and powerful than the discretization approach in these settings.
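A back-of-envelope illustration of this exponential growth (for a hypothetical 4-variable graph; the actual Autobounds formulation also introduces response-type variables, which grow even faster than the joint table below):

```python
def joint_support_size(levels, n_vars):
    """Number of cells in the joint distribution table over n_vars
    discretized variables, each with `levels` support points -- one
    driver of the polynomial program's size."""
    return levels ** n_vars

# doubling the discretization fineness multiplies the table by 2**n_vars
sizes = [joint_support_size(k, 4) for k in (2, 4, 8, 16)]
# → [16, 256, 4096, 65536]
```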
Thank you again for your detailed review and valuable questions. We are glad to take any further questions.
[1] Guilherme Duarte, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. An automated approach to causal inference in discrete settings, September 2021 | Summary: The paper extends neural causal models (NCMs) to continuous and mixed-type variables. NCMs are a neural-network-based tool for automated point/partial identification and estimation of causal queries. The authors provide new theoretical results (1) on how to construct a canonical representation of an SCM, (2) on how to approximate it with sufficiently expressive Lipschitz neural networks, and (3) on the additional assumptions needed to perform consistent identification/estimation of an average treatment effect (ATE) from an arbitrary causal diagram and an observational distribution. The paper provides experimental results for the partial identification of the ATE in the discrete and continuous outcome settings.
Strengths: To the best of my knowledge, this paper is the first one to advance neural causal models (NCMs) to the mixed-type variable setting. NCMs are a universal tool for general automated partial identification and estimation of causal quantities (a very important problem in causal inference and treatment effect estimation). The theory provided in the paper is very general and extends beyond simple linear SCMs. The paper is clearly written (as far as a massive theoretical contribution allows) and well-structured. The authors provide rigorous proofs for all the theoretical statements.
Weaknesses: I didn’t find any major or minor weaknesses in the paper. Yet, I have several suggestions on how to improve the paper’s understandability for the general public of the NeurIPS conference:
1. I encourage the authors to provide more examples of how to construct canonical representations for some simple SCMs and then the corresponding neural architectures of the NCMs.
2. I would provide a more precise explanation of Figures 1 and 5, e.g., the number of layers in the yellow and blue blocks. Also, multiple notation elements could potentially be added to the Figures, e.g., the number of connected components in the latent space or the number of confounded components.
3. I also encourage the authors to provide the code of the proposed method.
4. Seems like the conclusion is missing in the main part of the text.
I am willing to increase my score if the authors implement the above-mentioned suggestions.
Technical Quality: 4
Clarity: 3
Questions for Authors: I appreciate the provided continuous IV experiment. I wonder, what is the performance of the method in other confounded ATE settings where the ground-truth is available, e.g., a no-assumptions bound for the ATE with hidden confounding?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have fully stated the limitations of their theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the helpful suggestions, which make our paper clearer and more readable. Below are the changes we make.
***1. Provide more examples of how to construct canonical representations for some simple SCMs and then the corresponding neural architectures of the NCMs.***
Thanks for the suggestion. We will provide more examples in the appendix in a later version. Specifically, we plan to start from the example in Appendix A.
The causal graph of this example is shown in Figure 2b in our paper. Following the construction in Proposition 4, we use one latent variable for each $C^2$ component. As we explain in Appendix A, this causal model has three $C^2$ components: $ \\{V_1,V_2,V_3\\}, \\{V_3,V_4\\}, \\{V_4,V_5\\} $. In the canonical representation, each latent variable enters exactly its corresponding $C^2$ component. The canonical model is shown in Figure 2(a) in the global response.
Now, we show how to construct the NCM architecture from a canonical representation. As we mentioned in Section 3, we approximate the latent distribution by pushing forward uniform and Gumbel variables. The structure equations of the NCM are
\begin{align}
&V_1 = f^{\theta_1}_1(V_2,g_1^{\phi_1}(Z_1)),\\\\
&V_2 = f^{\theta_2}_2(g_1^{\phi_1}(Z_1)),\\\\
&V_3 = f^{\theta_3}_3(V_1,V_2,g_1^{\phi_1}(Z_1),g_2^{\phi_2}(Z_2)), \\\\
&V_4 = f_4^{\theta_4}(V_3,g_2^{\phi_2}(Z_2),g_3^{\phi_3}(Z_3)),\\\\
&V_5 = f_5^{\theta_5}(V_4,g_3^{\phi_3}(Z_3)),
\end{align}
where $f_i^{\theta_i}$ and $g_j^{\phi_j}$ are neural networks, each $Z_j$ is a joint vector of independent uniform and Gumbel variables, and each $g_j^{\phi_j}$ has the special architecture described in Sections 3.1 and 3.2. Figure 2(b) in the global response shows the architecture of the NCM; each in-edge represents an input of a neural net.
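As an illustration only (untrained random weights, scalar variables, and hypothetical layer sizes of our choosing), the ancestral sampling pass of such an NCM could be sketched as follows. We route each component's pushed-forward latent into all members of that $C^2$ component; for categorical variables one would additionally append Gumbel noise to the $Z_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim=1, hidden=8):
    """A tiny random MLP standing in for a trainable f_i or g_j network."""
    W1 = rng.normal(size=(in_dim, hidden)); b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=(hidden, out_dim)); b2 = rng.normal(size=out_dim)
    return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2

# one push-forward network g_j per C^2 component:
# {V1,V2,V3} <- Z1,   {V3,V4} <- Z2,   {V4,V5} <- Z3
g1, g2, g3 = mlp(1), mlp(1), mlp(1)
f1, f2, f3 = mlp(2), mlp(1), mlp(4)
f4, f5 = mlp(3), mlp(2)

def sample_ncm(n):
    """Ancestral sampling: each V_i sees only the latents of its components."""
    Z1, Z2, Z3 = (rng.uniform(size=(n, 1)) for _ in range(3))
    E1, E2, E3 = g1(Z1), g2(Z2), g3(Z3)
    V2 = f2(E1)
    V1 = f1(np.hstack([V2, E1]))
    V3 = f3(np.hstack([V1, V2, E1, E2]))
    V4 = f4(np.hstack([V3, E2, E3]))
    V5 = f5(np.hstack([V4, E3]))
    return V1, V2, V3, V4, V5

V = sample_ncm(4)
```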
***2. I would provide a more precise explanation of Figures 1 and 5, e.g., the number of layers in the yellow and blue blocks. Also, multiple notation elements could potentially be added to the Figures, e.g., the number of connected components in the latent space or the number of confounded components.***
Thank you for your helpful advice. We have modified Figures 1 and 5 accordingly and include the modified figures in the global response PDF. Captions about the number of layers will be added as well. We will implement these changes in a later version.
***3. I also encourage the authors to provide the code of the proposed method.***
Thanks for the suggestion! We will include the link to the code in the camera-ready version if our paper gets accepted.
***4. The conclusion is missing in the main part of the text.***
Thank you for pointing this out. We did not include a conclusion to avoid repetition given space constraints, but we will add it in the camera-ready version. We will add the following 'Conclusion and Limitations' section to the paper.
In this paper, we provide theoretical justification for using NCMs for partial identification. We show that NCMs can represent SCMs with complex unknown latent distributions under mild assumptions, and we prove the asymptotic consistency of the max/min estimator for partial identification of causal effects in general settings with both discrete and continuous variables. Our results also provide guidelines on the practical implementation of this method and on which hyperparameters are important, as well as recommendations on the values these hyperparameters should take for the consistency of the method. These practical guidelines were validated with a small set of targeted experiments, which also showcase the superior performance of the neural-causal approach compared to a prior main contender approach from econometrics and statistics, which involves discretization and polynomial programming.
An obvious next step in the theoretical foundation of neural-causal models is providing finite sample guarantees for this method, which requires substantial further theoretical developments in the understanding of the geometry of the optimization program that defines the bounds on the causal effect of interest. We take a first step in that direction for the special case where there are no unobserved confounders, and we view the general case as an exciting avenue for future work.
***5. What is the performance of the method in other confounded ATE settings where the ground-truth is available, e.g., a no-assumptions bound for the ATE with hidden confounding?***
Thank you for asking. We ran an extra experiment in the leaky mediation setting [1]. The structural equations of this causal model are
\begin{align*}
T &= C + U_T, \\\\
X &= T + U_X + U, \\\\
Y &= 2X + U +C +U_Y, \\\\
\end{align*}
where $C,U,U_T,U_X,U_Y \sim \text{Unif}(-1,1)$ are latent variables and $T,X,Y$ are observed variables. As in the settings in our paper, we compare our algorithm with the Autobounds package. The true ATE of this model is 2. The following results are averages over 10 runs of each algorithm. The bound obtained by the NCM approach is $[-4.19, 4.12]$, while the bound obtained by Autobounds is $[-10.97, 12.56]$.
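For reference, the stated true ATE of 2 can be checked by simulating the SCM above and contrasting two interventions on $X$; this is our own Monte Carlo sketch, not part of either compared algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
unif = lambda: rng.uniform(-1, 1, n)

def simulate(x_do=None):
    """Sample the leaky-mediation SCM; x_do fixes X by intervention do(X=x)."""
    C, U, U_T, U_X, U_Y = unif(), unif(), unif(), unif(), unif()
    T = C + U_T
    X = T + U_X + U if x_do is None else np.full(n, float(x_do))
    Y = 2 * X + U + C + U_Y
    return Y

ate = simulate(x_do=1).mean() - simulate(x_do=0).mean()
# ate should be close to the true value of 2
```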
The bounds obtained by both algorithms may seem non-informative, but for this partial identification problem the ATE can indeed be an arbitrary real number. This is because the following causal model, with $\alpha$ ranging over all real numbers, has the same observational distribution as the ground-truth model.
\begin{align*}
T &= C + U_T, \\\\
X &= T + U_X + U, \\\\
Y &= \alpha X + (3 - \alpha)(U + C) + (2 - \alpha)(U_X + U_T) + U_Y, \\\\
\end{align*}
The results of the NCM approach correctly reflect this fact.
[1] Padh, Kirtan, et al. "Stochastic causal programming for bounding treatment effects." Conference on Causal Learning and Reasoning. PMLR, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications.
I still have an important concern regarding the no-assumptions bound experiments (5).
For the outcome $Y \in [a, b]$, the no-assumption (Manski) bounds on the ATE always have the width of $b - a$ [1]. In the example, provided by the authors, $Y \in [-11, 11]$ and, thus, the ground-truth width is $b - a = 22$. Therefore, it seems that the method proposed by the authors yields invalid bounds with a width of $8.32$ (whereas the Autobounds package provides a more valid width of $23.53$). Could the authors explain the source of the invalidity?
**References**:
- [1] Manski, Charles F. "Nonparametric bounds on treatment effects." The American Economic Review 80.2 (1990): 319-323.
---
Reply to Comment 1.1.1:
Comment: Thank you for your question.
We would like to point out that the Manski bound [1] is derived specifically for binary treatment settings, whereas in our example the treatment is continuous. Moreover, the causal graph of [1] differs from the leaky mediation graph. For these reasons, it is not immediately clear that the same bound can be applied directly in our context. A more comparable setting is the binary IV example in our paper, where both algorithms yield similar bounds; there, we use the algorithm from [2] to verify that the bounds we obtain are correct.
Additionally, we impose Lipschitz constraints on the structural functions when solving the PI problem, which may result in tighter bounds. However, an analytic characterization of the optimal bounds under Lipschitz constraints has not been established and seems hard to obtain in closed form.
We are happy to address any further questions you may have.
[1] Manski, Charles F. "Nonparametric bounds on treatment effects." The American Economic Review 80.2 (1990): 319-323.
[2] Balke, Alexander, and Judea Pearl. "Bounds on treatment effects from studies with imperfect compliance." Journal of the American Statistical Association 92.439 (1997): 1171-1176. | Summary: This paper develops consistency results for partial identification via neural causal models with both continuous and categorical variables. Their results shed light on the impact of the neural network architecture and of Lipschitz regularization during training. The resulting method can be trained via gradient-based optimization algorithms and is validated on synthetic data.
Strengths: - The paper is clearly written. The assumptions used are clearly stated.
- The identification result is useful and a good complement to existing ones, by making them more general.
- The identification result and algorithm are technically sound.
Weaknesses: - The empirical validation of the algorithm/result is not extensive, but I understand that the goal of the work is to establish identification results and a thorough empirical study might not be necessary.
- The paper would benefit from giving some proof sketches (or a brief overview of the proof strategy) in the main paper.
- Many of the assumptions and results are not well explained, making it not straightforward to understand the assumptions and implications of the results.
Technical Quality: 3
Clarity: 2
Questions for Authors: - A conclusion section does not seem to be provided.
- It would enrich the paper to briefly discuss the connection with optimization-based approaches in causal learning (many of which also used similar techniques including augmented Lagrangian and gumbel techniques), although the task is not completely the same (see e.g. https://arxiv.org/abs/2007.01754, https://arxiv.org/abs/1910.08527, https://openreview.net/forum?id=HsSLdHuAmnY).
- It would be helpful to elaborate more on the assumptions required and discuss some examples where the assumptions are satisfied/violated.
- Other related work that could be worth mentioning: https://arxiv.org/abs/2105.12891
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations have not been well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your advice to improve our paper! Below is our response; we hope our clarifications address your concerns.
***1. The empirical validation of the algorithm/result is not extensive.***
Thank you for your feedback. As we point out in the paper, the methodology that we analyze has been proposed and analyzed experimentally in prior work [2,3,4]. The main contribution of our paper is to provide a theoretical justification of this methodology in general environments. Given the prior experimental validation, we did not perform extensive experiments; instead, we opted for a small set of targeted experiments that highlight the key design elements of the procedure (e.g., neural architecture, Lipschitz regularization) that our theory predicts are important. Moreover, we note that, unlike prior work, ours is the first to experimentally compare the neural-causal approach to the polynomial programming approach of [5], a key contender in the prior literature on partial identification in econometrics and statistics (see Appendix D for more details on our experimental setup).
***2. The paper would benefit from giving some proof sketches (or a brief overview of the proof strategy) in the main paper.***
Thanks for your suggestion. We will add some proof sketches in the main body and before the proofs in the appendix. Here, we briefly discuss our proof strategy for the main consistency theorem (Theorem 4).
We first establish a general result concerning the convergence of the optimal values of a sequence of optimization problems (Proposition 6). More specifically, we consider a sequence of constrained optimization problems $\\{P_n\\}$ and prove that if the objective and constraint functions satisfy some regularity assumptions (i.e., Lipschitzness), the domain is compact, and the sequence of constraint functions converges uniformly to a limit function, then the upper and lower limits of the sequence of optimal values can be bounded. We then apply Proposition 6 to our setting, verifying that the partial identification problem satisfies the assumptions of the proposition using our earlier approximation results (Theorems 1, 2, 3 and Corollary 1).
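A toy numerical illustration of the phenomenon behind the proposition (our own example, not from the paper): grid-searching a sequence of constrained problems over a compact domain, with a Lipschitz objective and constraint functions converging uniformly, shows the optimal values converging to that of the limit problem:

```python
import numpy as np

xs = np.linspace(0.0, 2.0, 2001)  # compact domain [0, 2]
f = xs                            # Lipschitz objective f(x) = x

def opt_value(constraint):
    """Minimum of f over the feasible set {x : constraint(x) <= 0}."""
    feasible = constraint(xs) <= 0
    return f[feasible].min()

# constraints c_n(x) = (1 + 1/n) - x converge uniformly to c(x) = 1 - x,
# so the optimal values v_n should approach the limit problem's value v*
v_n = [opt_value(lambda x, n=n: (1 + 1 / n) - x) for n in (1, 10, 100, 1000)]
v_star = opt_value(lambda x: 1 - x)
```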
***3. A conclusion section does not seem to be provided.***
Thanks for the feedback. We did not include a conclusion to avoid repetition given space constraints, but we will add it in the camera-ready version. The conclusion we plan to add in the final version is shown in item 4 of our response to reviewer 8521.
***4. It would be helpful to elaborate more on the assumptions required and discuss some examples where the assumptions are satisfied/violated.***
Thank you for your advice. We will include extra discussions on the assumptions we use in the later version. Here, we briefly discuss the assumptions we use.
Assumption 1 is about the independence of latent variables, which is also used in [1,2]. Using this assumption, we can model confounding between two observed nodes in a causal model by letting one latent variable affect both nodes. This is primarily a convention and does not limit the applicability of the result.
Assumption 2: We discuss Assumption 2 in our reply to reviewer VZrh; please see item 1 of our response there.
Assumption 3 concerns the boundedness and Lipschitz continuity of the structural functions, which is quite standard in the literature on representation theorems for non-parametric functions with neural networks.
Assumptions 4 and 5 are about the latent distribution. These two assumptions enable us to approximate the latent distribution by pushing forward uniform and Gumbel variables. They may be violated if the support of the latent distribution has infinitely many connected components or if at least one component of the support is not homeomorphic to the unit cube. However, we expect most natural distributions to obey these assumptions, as a violation can be viewed as a form of pathological distribution.
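As a one-dimensional illustration of such a push-forward (using an analytic inverse CDF in place of the learned network $g$, and an Exp(1) target chosen by us purely for concreteness):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)  # base uniform variable

# The push-forward map a network g would have to learn for an Exp(1)
# latent is its inverse CDF; here we use the analytic form -log(1 - u).
z = -np.log1p(-u)

# check the pushed-forward sample against the Exp(1) target (mean 1, var 1)
mean, var = z.mean(), z.var()
```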
***5. Mention other related works.***
We appreciate the reviewer’s effort to improve our paper. The provided literature is really helpful. We will add these works to our Related Work part in the camera-ready version.
[1] Zhang, Junzhe, Elias Bareinboim, and Jin Tian. "Partial identification of counterfactual distributions." (2021).
[2] Xia, Kevin, et al. "The causal-neural connection: Expressiveness, learnability, and inference." Advances in Neural Information Processing Systems 34 (2021): 10823-10836.
[3] Balazadeh Meresht, Vahid, Vasilis Syrgkanis, and Rahul G. Krishnan. "Partial identification of treatment effects with implicit generative models." Advances in Neural Information Processing Systems 35 (2022): 22816-22829.
[4] Hu, Yaowei, et al. "A generative adversarial framework for bounding confounded causal effects." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 13. 2021.
[5] Duarte, Guilherme, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. "An automated approach to causal inference in discrete settings." September 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the helpful replies. Most of my concerns have been addressed and I would like to maintain my rating. | Rebuttal 1:
Rebuttal: Thanks for all the reviewers' helpful feedback. We include all the figures in our response in this PDF file.
Pdf: /pdf/b59bba06267ff362b8fbea6ee338f8dae54ab4aa.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation | Accept (poster) | Summary: This paper investigates erasing undesirable concepts from Stable Diffusion. The paper builds on a common finding in related works, namely that removing even one concept can significantly degrade the model's ability to generate other concepts. Existing methods typically select a neutral concept, such as "a photo" or an empty string, as an anchor to preserve while erasing the target concept, expecting that maintaining the neutral concept should help retain other concepts as well (UCE, TIME). However, even regularization attempts using these neutral concepts still affect the desirable concepts. This paper first finds the most sensitive concepts for erasure, and then, via a min-max (adversarial) formulation, adds another term to the loss function to remove only the undesirable concepts.
Strengths: The paper is well written and has a great flow. The authors clearly state the problem, current solutions, and their drawbacks. They build their approach on these drawbacks and support their proposed method with different experiments.
Weaknesses: - wrong citation for UCE in line 23
- Lines 29-30 are written as a strong conjecture without support: "no specific part of the model's weights is solely responsible for a single concept". I have seen counterexamples in the literature; see [1] for example.
- While the proposed method shows improvement over related works, I doubt its practical usage and hence raise some questions.
[1] Basu, Samyadeep, et al. "On Mechanistic Knowledge Localization in Text-to-Image Generative Models." Forty-first International Conference on Machine Learning. 2024.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Which CLIP model is used?
- Why is using a CLIP alignment score a reliable measure for concept inclusion? For example, a CLIP model that is trained on non-NSFW or non-gore data cannot detect nudity/violence. What would your proposed method rely on as a capability measure then?
- Line 152: the results of erasing the nudity concept, provided in the Appendix... I could not see these results. It is also best to reference the specific plot/section whenever you refer to the appendix.
- Line 255: finetuning the non-cross-attention modules. I believe that including more details here is crucial. Why do you finetune only non-cross-attention modules?
- Line 275: Do you think FID is sufficient as a quality measure of generated samples when removing concepts? For example, removing nudity might distort anatomy in a way that is not reflected in FID.
- Figure 4: We know that for nudity there is a huge difference between, for example, feet and female breasts. How would your proposed method remove only the latter while keeping the non-sensitive parts untouched?
- What is the level of granularity in the proposed method? e.g. can we use this method to remove Mercedes logo from cars?
- How does your proposed method differ from the "Forget-Me-Not" paper [1]? Any specific reason this is not covered in the comparisons?
- I am curious to see what you think of task vectors [2] as a potential direction to remove undesirable concepts?
> code
- Code: train_adversarial_gumbel.py - lines 461-490: I could not see the implementation of equation (4) in these lines. Should not it be "loss += criteria(z_n_wo_prompt_pred.to(devices[0]), z_0_org_pred.to(devices[0]))" for L1 and "loss = -negative_guidance * criteria(z_r_wo_prompt_pred.to(devices[0]), z_r_org_pred.to(devices[0]))" for L2?
- When using L2 distance for similarity between the vocab and the erasure word, why do you use K-means? Why not use the top-n from the similarity result? Did you look into the differences in results between these two?
- In the case of Stable Diffusion 3 (and SDXL), we have to have pooled and non-pooled captions. How would you calculate emb_r for the pooled embedding needed to condition on time?
[1] Zhang, Gong, et al. "Forget-me-not: Learning to forget in text-to-image diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[2] Ilharco, Gabriel, et al. "Editing models with task arithmetic." arXiv preprint arXiv:2212.04089 (2022).
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: already addressed as questions for further discussions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and insightful suggestions. We would like to address the remaining concerns as follows.
Due to the space limitation, some responses are provided in the global rebuttal and the author comment section.
**Q: Why using a CLIP alignment score is a reliable measure for concept inclusion?**
In Section 3.2, we investigated which concepts are affected most by the erasure of a target concept.
The concept space that we investigate is not only a specific set of related concepts as shown in Figure 1 but also a broader set of concepts of the CLIP token vocabulary which includes 49,408 tokens as shown in Figure 2.
In this general and broader concept space, it is nearly impossible to have a pre-trained classification model that can detect all concepts.
Therefore, to the best of our knowledge, the CLIP alignment score might be the most effective way to measure the concept inclusion,
even though it is not perfect and might not be able to provide a reliable measure for some concepts as suggested by the reviewer.
We will add this discussion to the revised version.
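For concreteness, the alignment computation amounts to cosine similarity between an image embedding and each token embedding; the sketch below uses mock random embeddings in place of the actual CLIP encoders, purely to illustrate how affected concepts would be ranked:

```python
import numpy as np

def clip_alignment(img_emb, txt_embs):
    """Cosine similarity between one image embedding and each token
    embedding -- the kind of alignment score used to rank concepts."""
    a = img_emb / np.linalg.norm(img_emb)
    B = txt_embs / np.linalg.norm(txt_embs, axis=1, keepdims=True)
    return B @ a

rng = np.random.default_rng(0)
img = rng.normal(size=512)            # mock image embedding
vocab = rng.normal(size=(1000, 512))  # mock token embeddings (real vocab: 49,408)
scores = clip_alignment(img, vocab)
top5 = np.argsort(scores)[-5:][::-1]  # indices of the most aligned concepts
```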
**Q: Results of erasing nudity concept"**
We apologize for the confusion. We would like to provide the results of erasing the "nudity" concept while preserving the "person" concept in the attached document.
As shown in the figure, preserving related concepts like "person" helps to retain the model's capability on other concepts much better than preserving a neutral concept.
**Q: Why do you finetune only non-cross attention modules to erase NSFW concepts?**
Firstly, we would like to recall the cross-attention mechanism, i.e., $\sigma(\frac{(QK^T)}{\sqrt{d}})V$, where $Q$, $K$, and $V$ are the query, key, and value matrices, respectively.
In text-to-image diffusion models like SD, the key and value are derived from the textual embedding of the prompt, while the query comes from the previous denoising step.
The cross-attention mechanism allows the model to focus on the relevant parts of the prompt to generate the image.
Therefore, when unlearning a concept, most of the time, the erasure process is done by loosening the attention between the query and the key that corresponds to the concept to be erased, i.e., by fine-tuning the cross-attention modules.
This approach works well for object-related concepts or artistic styles, where the target concept can be explicitly described with limited textual descriptions.
However, as investigated in the ESD paper Section 4.1, concepts like 'nudity' or NSFW content can be described in various ways, many of which do not contain explicit keywords like 'nudity.'
This makes it inefficient to rely solely on keywords to indicate the concept to be erased.
It is worth noting that the standard SD model has 12 transformer blocks, each of which contains one cross-attention module but also several non-cross-attention modules such as self-attention and feed-forward modules, not to mention other components like residual blocks.
Therefore, fine-tuning the non-cross-attention modules will have a more global effect on the model, making it more robust in erasing concepts that are not explicitly described in the prompt.
We will add more explanations to the revised version.
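For reference, the cross-attention computation recalled above can be sketched with toy dimensions and random weights (the real SD blocks wrap this in multi-head projections, biases, and residual connections; the shapes below are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(x_img, c_txt, d=16):
    """softmax(Q K^T / sqrt(d)) V with Q from image features and K, V
    from the prompt embedding, mirroring SD's cross-attention (toy weights)."""
    Wq = rng.normal(size=(x_img.shape[-1], d))
    Wk = rng.normal(size=(c_txt.shape[-1], d))
    Wv = rng.normal(size=(c_txt.shape[-1], d))
    Q, K, V = x_img @ Wq, c_txt @ Wk, c_txt @ Wv
    scores = Q @ K.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)  # each query attends over tokens
    return attn @ V, attn

x = rng.normal(size=(64, 32))  # 64 spatial queries from the denoiser
c = rng.normal(size=(77, 48))  # 77 prompt-token embeddings
out, attn = cross_attention(x, c)
```

Loosening the attention rows for the tokens of an erased concept changes `attn @ V` only for prompts that mention it, which is why cross-attention finetuning alone can miss implicit descriptions of the concept.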
**Q: Discussion on FID metric?**
We thank the reviewer for raising a very important question.
Firstly, we utilized FID measured on common concepts like the COCO-30K dataset to evaluate the generation quality of the sanitized models because it is a widely used metric in the field of generative models.
However, while this metric is useful in general generation settings, we acknowledge that it might not be sufficient for evaluating unlearning concepts.
As observed in our paper, the impact of erasure methods is not equally distributed across all concepts; some concepts might be more affected than others.
While this imbalance might be mitigated with a large enough evaluation set that covers all possible concepts, making FID sufficient, this is not the case in practice, where the COCO-30K dataset is commonly used.
Therefore, we believe that a more comprehensive evaluation metric that can capture the impact of erasing concepts on specific concepts is needed.
For example, we could assign a weight to each evaluated concept/prompt based on its relevance score to the target concepts, then compute a weighted FID or CLIP score.
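A minimal sketch of this weighted-aggregate idea (the function, scores, and relevance values below are hypothetical illustrations of ours, not a metric from the paper):

```python
import numpy as np

def weighted_metric(scores, relevance):
    """Hypothetical aggregate: prompts more related to the erased
    concept (higher relevance) contribute more to the final score."""
    w = np.asarray(relevance, dtype=float)
    w /= w.sum()  # normalize relevance into weights
    return float(np.dot(w, scores))

# per-prompt CLIP scores and (made-up) relevance to the erased concept
scores = [0.30, 0.28, 0.10]
relevance = [0.1, 0.2, 0.7]
m = weighted_metric(scores, relevance)
# → roughly 0.156, dominated by the most relevant prompt
```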
**Q: How would your proposed method only remove the sensitive parts while keeping the non-sensitive parts untouched in the NSFW setting?**
While our method has demonstrated an interesting result in its ability to remove more sensitive body parts, we do not have a specific explanation for this phenomenon. However, intuitively speaking, if we consider two abstract concepts, "nudity" and "person," and two core concepts, "breast" and "feet," we can see that the "nudity" concept might be highly correlated with the "breast" concept, while the "person" concept might be more correlated with the "feet" concept rather than "breast."
Therefore, if we specifically preserve the "person" concept while erasing the "nudity" concept, the model might be able to generate images that include "feet" but exclude "breast." Our method, by selecting the most affected concepts, might naturally choose a concept that is highly correlated with non-sensitive body parts to be preserved.
**Q: What is the level of granularity in the proposed method?**
Our method does not have a specific mechanism to control the level of granularity or focus on a specific concept to be erased. Instead, our main goal is to prioritize the preservation aspect of the erasure process. However, if we were to address the problem of granularity, we would focus on enhancing the expressiveness of the textual description.
For example, using visual embeddings to describe the Mercedes logo concept could be a potential direction to improve the expressiveness and granularity of the method.
---
Rebuttal Comment 1.1:
Title: answer to Authors rebuttal
Comment: Thank you for taking the time to address my concerns. Unfortunately some of my questions are still not addressed and I really hope that we can discuss over those to conclude this rebuttal. Please see my responses below
Q: reliability of CLIP alignment
I believe that limitations of using CLIP score should be clearly mentioned in your draft and you need to focus more on non-NSFW removal because of limitation mentioned before.
Q: Results of erasing nudity concept
I could not find these results. When adding any new figures/plots, provide details on how to find them and make the reader's life easier. This is also my strong recommendation for writing your draft: you need to specify which figure in the Appendix you are referring to.
Q: Why do you finetune only non-cross attention modules to erase NSFW concepts?
Do you have any results/references to ESD that supports/compares cross-attention finetuning vs. all attention finetuning?
Q: Discussion on FID metric?
I believe that running FID is a good automatic evaluation but not sufficient. You should always look at generated samples and include them. (examples in the appendix are good enough)
Q: Granularity/only remove the sensitive parts while keeping the non-sensitive parts
My main concern, which is not reflected in the limitations, is the practical usage of the proposed method. When asking these questions, I was hoping to get some non-intuitive responses, such as evidence supporting the strength of the proposed method, or at least a discussion of future directions.
Q: "Forget-Me-Not" paper
I think that the comparison with this work should be included in the main manuscript. I was surprised not to see it in the comparisons. Please provide any results if you have tried single-concept erasure with this method.
Q: Coding question: Implementations of Equation 4 and Equation 5
I spent some time mapping the equations in your paper to the provided anonymous code when writing the original review, and hence asked specific, detailed questions about potential bugs in your implementation. Please provide more details that might resolve my/readers' confusion.
Q: SDXL/SD3 implementation
I still do not understand how to get the pooled embedding. In both of these methods we need pooled and non-pooled features. I can see that the non-pooled features can be optimized using Gumbel-Softmax, but your response still does not cover the pooled caption embedding.
---
Reply to Comment 1.1.1:
Title: Further responses (1/n)
Comment: We thank the reviewer for actively engaging in the discussion and providing valuable feedback. In the following, we would like to address the reviewer's further comments:
**Q: I believe that limitations of using CLIP score should be clearly mentioned in your draft and you need to focus more on non-NSFW removal because of limitations mentioned before.**
We appreciate the reviewer's feedback. We will discuss the limitations of using the CLIP alignment score in the revised version.
**Q: Results of erasing nudity concept**
We appreciate the reviewer's recommendation and deeply apologize for forgetting to provide the details of the figure in the main paper and the appendix.
We provided the results of erasing the "nudity" concept while preserving the "person" concept in the attached document in the global rebuttal.
More specifically, we compare the impact of erasing the same "nudity" concept on other concepts under different preservation strategies, including preserving a fixed concept such as " " or "person", and preserving the most affected concepts found by our method.
We will add a reference to the specific figure in the revised version.
**Q: Do you have any results/references to ESD that supports/compares cross-attention finetuning vs. all attention finetuning?**
In addition to the explanation provided in the previous response, we would like to provide additional experiments with the ESD method with different fine-tuning strategies as below.
More specifically, we compare the erasure performance of the ESD method by fine-tuning the cross-attention modules only (ESD-x) and fine-tuning non-cross-attention modules only (ESD-u).
| | NER-0.3↓ | NER-0.5↓ | NER-0.7↓ | NER-0.8↓ |
|----------|----------|----------|----------|----------|
| SD | 16.69 | 10.91 | 5.46 | 2.02 |
| ESD-x | 10.25 | 5.83 | 2.17 | 0.68 |
| ESD-u | 5.32 | 2.36 | 0.74 | 0.23 |
| Ours-u | 3.64 | 1.70 | 0.40 | 0.06 |
It can be seen that the erasure performance when fine-tuning the non-cross-attention modules is significantly better than when fine-tuning the cross-attention modules only, as shown by the lower NER scores across all thresholds.
The detailed results at threshold 0.5 are shown in the table below, with the number of exposed parts and the number of images with any exposed parts.
| | SD | ESD-x | ESD-u |
|------------------------|-----|-------|-------|
| Feet | 92 | 61 | 24 |
| Belly | 212 | 81 | 21 |
| Armpits | 261 | 123 | 63 |
| Buttocks | 53 | 26 | 3 |
| Male Breast | 75 | 27 | 14 |
| Male Genitalia | 23 | 13 | 9 |
| Female Genitalia | 28 | 7 | 1 |
| Female Breast | 331 | 124 | 38 |
| Total #exposed part | 1075| 462 | 173 |
| Total #img-with-any-expose | 513 | 274 | 111 |
| NER | 10.91| 5.83 | 2.36 |
**Q: I believe that running FID is a good automatic evaluation but not sufficient. You should always look at generated samples and include them. (examples in the appendix are good enough)**
We thank the reviewer for the suggestion. Because of the page limit, we could not include the generated samples in the main paper.
We will try to include them in the revised version or provide a clear reference to the appendix.
---
Rebuttal 2:
Title: Further responses
Comment: **Q: Which CLIP model is used?**
We used the OpenAI CLIP model `openai/clip-vit-large-patch14` to compute the alignment score. We will add this information to the revised version.
**Q: How does your proposed method differ to the "Forget-Me-Not" paper? Any specific reason this is not covered in comparisons?**
Our method significantly differs from the Forget-Me-Not (FMN) method. From our point of view, FMN falls into the category of approaches like TIME, UCE, and MACE,
which focus on confusing the alignment between the prompt and the visual features in the cross-attention mechanism.
Specifically, FMN introduces an attention resteering method that attempts to alter the attention maps related to the target concept (i.e., by minimizing the L2 norm of the attention maps).
Our method, on the other hand, stands out by focusing on identifying which concepts are most affected by the erasure of a target concept and then preserving these concepts to maintain the model's capability on other concepts.
We will cite and discuss the FMN in the revised version.
Regarding additional experiments with the FMN method, we attempted to erase the same set of concepts from the Imagenette dataset and evaluated the erasing and preservation performance of the FMN method within our setting.
However, despite our best efforts given the time constraints, FMN did not effectively erase the target concepts.
Notably, their open-source code provides hyper-parameter settings for single-concept erasure but not for multiple concepts.
We also noticed an open issue on their GitHub repository questioning this same problem.
This issue can easily be verified by running their code with a modified configuration file, `attn.yaml`,
to erase multiple concepts, given some representative images of the target concepts.
**Q: Coding question: Implementations of Equation 4 and Equation 5**
Equation 4 describes our naive approach, which utilizes Projected Gradient Descent (PGD) to search for the adversarial concepts in the continuous space. We have uploaded the code for this approach to the anonymous repository. Please refer to the code for more details.
Regarding the implementation of Equation 5, our method involves bilevel optimization. In this process, the inner maximization step maximizes the L2 loss (i.e., minimizes -L2 loss) to select the most affected concepts to be preserved. The outer minimization step minimizes the combined L1 + L2 loss to erase the target concepts while preserving the most affected concepts.
For the L1 loss, we inherited this implementation from the ESD paper and did not attempt to modify it, including the aspect of negative guidance.
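To make the bilevel structure of Equation 5 concrete, the following is a minimal numpy sketch of the loop described above: the inner maximization picks the most affected (adversarial) concept, and the outer minimization takes a gradient step on the combined erasing + preservation objective. The linear "model," the concept names, and the losses are illustrative toys, not the paper's actual diffusion-model implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the noise predictor: eps(theta, c) = theta @ emb[c].
emb = {c: rng.normal(size=4) for c in ["church", "truck", "person", "sky"]}
theta0 = rng.normal(size=(4, 4))   # frozen original model
theta = theta0.copy()              # model being fine-tuned
c_e, c_n = "church", "sky"         # concept to erase -> neutral concept
lam, lr = 1.0, 0.05

def eps(th, c):
    return th @ emb[c]

def l2_pres(th, c):
    # L2: how far the fine-tuned model has drifted from the original on concept c
    return np.sum((eps(th, c) - eps(theta0, c)) ** 2)

err0 = np.sum((eps(theta0, c_e) - eps(theta0, c_n)) ** 2)  # initial erasing gap

for _ in range(200):
    # Inner maximization: pick the most affected concept (excluding the target).
    c_a = max((c for c in emb if c != c_e), key=lambda c: l2_pres(theta, c))
    # Outer minimization: L1 drives eps(theta, c_e) toward eps(theta0, c_n),
    # while lam * L2 keeps the adversarial concept c_a close to the original.
    g1 = 2 * np.outer(eps(theta, c_e) - eps(theta0, c_n), emb[c_e])
    g2 = 2 * np.outer(eps(theta, c_a) - eps(theta0, c_a), emb[c_a])
    theta -= lr * (g1 + lam * g2)

erase_err = np.sum((eps(theta, c_e) - eps(theta0, c_n)) ** 2)
```

Even in this toy, the erasing gap shrinks while the selected `c_a` varies over iterations, mirroring the alternating selection-and-preservation dynamic described above.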
**Q: Coding question: Why using K-means?**
In the implementation, we investigate the use of K-means to control the tradeoff between the computational cost and the size of the vocabulary when searching for adversarial concepts. More specifically, we first compute the similarity between the target concept and the entire vocabulary, then select the top-K most similar concepts. We then use K-means to choose the K most representative concepts from these top-K most similar concepts. This approach allows us to cover a wide range of concepts while keeping the computational cost low.
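The two-stage pruning just described (top-K similarity filter, then K-means for representatives) can be sketched as follows. The embedding table, vocabulary, and sizes are hypothetical placeholders, and a hand-rolled Lloyd's-algorithm K-means stands in for whatever clustering routine the actual code uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embedding table standing in for a text-encoder vocabulary.
vocab = [f"word_{i}" for i in range(500)]
E = rng.normal(size=(500, 8))
E /= np.linalg.norm(E, axis=1, keepdims=True)
target = rng.normal(size=8)
target /= np.linalg.norm(target)

# 1) Cosine similarity to the target concept; keep the top-K most similar words.
K_top, K_clusters = 100, 10
top_idx = np.argsort(E @ target)[::-1][:K_top]
X = E[top_idx]

# 2) Plain K-means (Lloyd's algorithm) over the top-K candidates.
centers = X[rng.choice(K_top, K_clusters, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(X[:, None] - centers[None], axis=2)  # (K_top, K_clusters)
    labels = dists.argmin(axis=1)
    for k in range(K_clusters):
        if np.any(labels == k):
            centers[k] = X[labels == k].mean(axis=0)

# 3) Keep, per cluster, the member closest to its center as the representative.
reps = []
for k in range(K_clusters):
    members = np.where(labels == k)[0]
    if members.size:
        best = members[np.argmin(np.linalg.norm(X[members] - centers[k], axis=1))]
        reps.append(vocab[top_idx[best]])
```

The `reps` list is the reduced search space: at most `K_clusters` words that still span the neighborhood of the target concept.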
**Q: Coding question: How to calculate the text embedding for adversarial concepts in Stable Diffusion version 3**
To the best of our understanding, as described in the `encode_prompt` function in the Stable Diffusion v3 pipeline (lines 310-311),
and similarly in the same function in the SDXL pipeline (line 397), we can still obtain the standard prompt embedding as the output of a text encoder, which is independent of the time step. Therefore, we can still use our current approach to calculate the `emb_r` for the adversarial concepts.
---
Rebuttal 3:
Title: Looking forward to your responses!
Comment: As the rebuttal period is coming to an end, we hope to have clarified the novelty of our contribution and addressed the concerns of the reviewer. We truly appreciate the reviewer's time on our paper and we are looking forward to your responses and/or any other potential questions. Your feedback would be very helpful for us to identify any ambiguities in the paper for further improvement. | Summary: The present paper addresses the challenge of erasing content from text-to-image diffusion models with a focus on reducing the degenerative impact on other concepts. To this end, the authors propose a novel approach that focuses on identifying and preserving adversarial concepts—those which are most affected by changes in model parameters during the erasure. To provide evidence, the authors conduct empirical investigations and various experiments using the open-source Stable Diffusion model, demonstrating their method’s potential to reliably erase target concepts while offering minimal impact on non-target concepts.
Strengths: - The paper is well-structured, providing clear and comprehensive explanations of the proposed method and its theoretical foundations.
- The novelty of focusing on adversarial concepts to balance erasure and preservation is a significant contribution to the field.
- The experiments are well selected. They first showcase the reliability of the proposed method on a generic task, followed by two relevant use cases, namely erasing unethical and artistic concepts.
- While not presented in the main paper, the authors address limitations of their proposed method in the appendix.
Weaknesses: - While the paper showcases the benefits of the proposed method, the observations and conclusions drawn from the empirical experiments do not fully align with what is described by the authors, particularly regarding the minimal impact on other concepts. For example, in lines 231-232, the authors state that the proposed method achieves much higher ESR scores than the two baselines ESD and CA; however, this claim is not entirely accurate, especially in the case of CA. Similarly, the improvements over the baselines shown in the third experiment are not as significant as described, particularly when considering the standard deviations presented in Table 3. The paper would benefit from a more detailed discussion and analysis of these results.
Technical Quality: 3
Clarity: 3
Questions for Authors: - As far as I understand, the set of concepts used to search for adversarial concepts is derived from the list of words in the initial prompt (lines 76-79). However, it appears to be a common set used across multiple examples. Can you provide more details and clarify how this set of concepts is selected? Have you experimented with different sets of initial concepts?
- Further, you mentioned in lines 252-255 that, in the case of mitigating unethical content, it is necessary to fine-tune the non-cross-attention modules. Can you elaborate on why this is the case?
Minor comments:
- In line 238, CA seems to be the best baseline, not ESD.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors addressed limitations in the appendix as stated in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and insightful suggestions. We would like to address the remaining concerns as follows.
**Q: Better discussion**
We thank the reviewer for pointing this out. We will revise the comparison with CA in the revised version, i.e., our method is slightly better than CA in terms of ESR scores but significantly better in terms of PSR scores.
**Q: How to choose the concept space**
We did not design a task-specific search space but utilized a common set of concepts as the search space $\mathcal{R}$ for adversarial concepts across all experiments.
To ensure the generality of the search space so that it can be applied to various tasks such as object-related concepts, NSFW content, and artistic styles,
we used the Oxford 3000 most common words in English as the search space.
It is worth noting that, as described in Section B.1, our method employs the Gumbel-Softmax trick to search discretely in the concept space $\mathcal{R}$;
this approach requires feeding the model the embeddings of the entire search space $\mathcal{R}$ to compute the alignment score,
which is computationally expensive when the search space is large.
To mitigate this, we use a subset of the K most similar concepts to reduce the computational cost.
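The Gumbel-Softmax selection over the reduced candidate set can be sketched in a few lines. This is a generic straight-through-style sketch, not the paper's code; the candidate embeddings, sizes, and logits initialization are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def gumbel_softmax(logits, tau=0.5):
    """Sample a relaxed one-hot selection vector over candidates."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

# Hypothetical reduced search space: K candidate concept embeddings of dim d.
K, d = 16, 8
cand_emb = rng.normal(size=(K, d))
logits = np.zeros(K)            # learnable selection logits (toy initialization)

w = gumbel_softmax(logits)      # soft, differentiable one-hot weights
c_a_emb = w @ cand_emb          # relaxed embedding of the selected concept
hard_pick = int(w.argmax())     # discrete choice read off at the end of search
```

Because `c_a_emb` is a differentiable mixture of candidate embeddings, gradients from the alignment score can flow back into `logits`, while `hard_pick` yields the final discrete concept.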
To better address the reviewer's concern, we conducted additional experiments with the search space as the CLIP token vocabulary, which includes 49,408 tokens.
It is worth noting that the CLIP token vocabulary is more comprehensive but presents challenges due to the large number of nonsensical tokens.
Therefore, we need to filter out these nonsensical tokens to ensure the quality of the search space.
The results from object-related concepts are shown in the table below.
| Vocab | ESR-1 ↑ | ESR-5 ↑ | PSR-1 ↑ | PSR-5 ↑ |
|--------|---------|---------|---------|---------|
| Oxford | 98.72 | 95.60 | 63.80 | 82.96 |
| CLIP | 97.88 | 94.80 | 69.24 | 87.20 |
The results show that the erasing performance is slightly lower when using the CLIP token vocabulary as the search space,
but the preservation performance is much better with a gap of 5.4\% in PSR-1 and 4.2\% in PSR-5.
This indicates that our method would benefit from a more comprehensive search space.
We will add this experiment to the revised version.
**Q: it is necessary to fine-tune the non-cross-attention modules in erasing NSFW concept**
Firstly, we would like to recall the cross-attention mechanism, i.e., $\sigma\left(\frac{QK^T}{\sqrt{d}}\right)V$, where $Q$, $K$, and $V$ are the query, key, and value matrices, respectively.
In text-to-image diffusion models like SD, the key and value are derived from the textual embedding of the prompt, while the query comes from the previous denoising step.
The cross-attention mechanism allows the model to focus on the relevant parts of the prompt to generate the image.
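The mechanism recalled above can be sketched in a few lines of numpy; the shapes below are illustrative placeholders, not SD's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative shapes: n_pix latent tokens attend over n_tok prompt tokens.
n_pix, n_tok, d = 64, 77, 32
query = rng.normal(size=(n_pix, d))   # from the previous denoising step
key = rng.normal(size=(n_tok, d))     # derived from the prompt's text embedding
value = rng.normal(size=(n_tok, d))   # derived from the prompt's text embedding

attn = softmax(query @ key.T / np.sqrt(d))  # sigma(QK^T / sqrt(d))
out = attn @ value                          # each latent token mixes prompt tokens
```

Each row of `attn` sums to one, so every latent token's output is a weighted mixture of prompt-token values; this is the prompt-to-image coupling that cross-attention fine-tuning loosens.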
Therefore, when unlearning a concept, most of the time, the erasure process is done by loosening the attention between the query and the key that corresponds to the concept to be erased, i.e., by fine-tuning the cross-attention modules.
This approach works well for object-related concepts or artistic styles, where the target concept can be explicitly described with limited textual descriptions.
However, as investigated in the ESD paper Section 4.1, concepts like 'nudity' or NSFW content can be described in various ways, many of which do not contain explicit keywords like 'nudity.'
This makes it inefficient to rely solely on keywords to indicate the concept to be erased.
It is worth noting that the standard SD model has 12 transformer blocks, each of which contains one cross-attention module but also several non-cross-attention modules such as self-attention and feed-forward modules, not to mention other components like residual blocks.
Therefore, fine-tuning the non-cross-attention modules will have a more global effect on the model, making it more robust in erasing concepts that are not explicitly described in the prompt.
We will add more explanations to the revised version.
---
Rebuttal 2:
Title: Looking forward to your responses!
Comment: As the rebuttal period is coming to an end, we hope to have clarified the novelty of our contribution and addressed the concerns of the reviewer. We truly appreciate the reviewer's time on our paper and we are looking forward to your responses and/or any other potential questions. Your feedback would be very helpful for us to identify any ambiguities in the paper for further improvement.
---
Rebuttal Comment 2.1:
Comment: Thank you for the rebuttal and for including the additional experiments, which have clarified some aspects of the search space design. I also appreciate the additional experiments shown in the global rebuttal.
Regarding the claim to “fine-tune the non-cross-attention modules,” I agree with Reviewer fKJG that presenting results to support this claim would significantly strengthen the argument.
---
Reply to Comment 2.1.1:
Title: Further response
Comment: We thank the reviewer for actively engaging in the discussion and providing valuable feedback.
Regarding the discussion on "fine-tuning the non-cross-attention modules": In addition to the explanation provided in the previous response, we would like to provide additional experiments with the ESD method with different fine-tuning strategies as below.
More specifically, we compare the erasure performance of the ESD method by fine-tuning the cross-attention modules only (ESD-x) and fine-tuning non-cross-attention modules only (ESD-u).
| | NER-0.3↓ | NER-0.5↓ | NER-0.7↓ | NER-0.8↓ |
|----------|----------|----------|----------|----------|
| SD | 16.69 | 10.91 | 5.46 | 2.02 |
| ESD-x | 10.25 | 5.83 | 2.17 | 0.68 |
| ESD-u | 5.32 | 2.36 | 0.74 | 0.23 |
| Ours-u | 3.64 | 1.70 | 0.40 | 0.06 |
It can be seen that the erasure performance when fine-tuning the non-cross-attention modules is significantly better than when fine-tuning the cross-attention modules only, as shown by the lower NER scores across all thresholds.
The detailed results at threshold 0.5 are shown in the table below, with the number of exposed parts and the number of images with any exposed parts.
| | SD | ESD-x | ESD-u |
|------------------------|-----|-------|-------|
| Feet | 92 | 61 | 24 |
| Belly | 212 | 81 | 21 |
| Armpits | 261 | 123 | 63 |
| Buttocks | 53 | 26 | 3 |
| Male Breast | 75 | 27 | 14 |
| Male Genitalia | 23 | 13 | 9 |
| Female Genitalia | 28 | 7 | 1 |
| Female Breast | 331 | 124 | 38 |
| Total #exposed part | 1075| 462 | 173 |
| Total #img-with-any-expose | 513 | 274 | 111 |
| NER | 10.91| 5.83 | 2.36 | | Summary: This paper focuses on the problem that existing concept erasing methods struggle to address the trade-off between the generation capability of erased concepts and remaining concepts. To address this problem, this paper proposes a method that erases the target concept while minimizing the impact of other concepts. Specifically, this paper finds that related concepts are sensitive during the erasing process of the target concept. For example, when erasing 'nudity', some related concepts, such as 'woman' and 'people' will be significantly impacted. Motivated by this observation, this paper first utilizes an optimization method to automatically find the adversarial concepts related to the target concept. Following this, this paper iteratively applies the preservation constraint on these adversarial concepts during the erasing process.
Strengths: - This paper provides an empirical observation that removing different target concepts leads to varying impacts on other concepts, which is interesting and helpful for future research.
- In the experiment of erasing object-related concepts, the proposed method demonstrates the effectiveness in maintaining the generation capability of remaining concepts.
Weaknesses: - The motivation of this paper needs further discussion. My core issue is: do all related concepts need to be preserved? For example, if we want to erase 'airplane', do 'aircraft' and 'warplane' need to be preserved? Motivated by this issue, I argue that there should be a boundary between the preserved and erased concepts, while this paper ignores this boundary. More extremely, some works [1,2,3] argue that we should erase concepts related to the target concept.
- Lack of comparison with the latest methods, such as MACE [4].
- In the experiment of erasing object-related concepts, it is suggested to add an experiment that demonstrates the generalization of the model. For example, following MACE [4], this paper can evaluate the erasing capability on the synonyms of the erasing concept.
- In the experiment of erasing NSFW content, this paper lacks an evaluation of the generation capability on the common concepts. Following [2,3], this paper can evaluate the FID of the model on COCO dataset.
Technical Quality: 2
Clarity: 1
Questions for Authors: Please check weaknesses.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and suggestions. We would like to address the remaining concerns as follows.
Due to the space limitation, some responses are provided in the global rebuttal.
**Q: Do all related concepts need to be preserved? Boundary between preserved and erasing concepts**
We agree with the reviewer that when erasing a target concept like 'airplane,' synonyms that resemble it visually, such as 'aircraft,' should also be erased.
However, while our method does not explicitly set a boundary when searching for adversarial concepts, the optimization process naturally selects the most affected concepts, rather than concepts merely 'similar' to the target, to be preserved.
We support our response with both theoretical and empirical evidence.
Theoretical perspective:
Our optimization framework, described in Equations 4 and 5, involves a bilevel optimization process.
The outer level (w.r.t. $\theta'$) minimizes the erasing loss L1 and the preservation loss L2 simultaneously,
while the inner level (w.r.t. $c_a$) maximizes L2 to select the most affected concepts to be preserved.
Initially, $\theta' = \theta$; therefore, after minimizing L1, the most affected concept will be exactly the target concept $c_e$.
As optimization progresses, the model $\theta'$ diverges from the original model $\theta$, driving the output $\epsilon_{\theta'}(c_e)$ away from $\epsilon_{\theta}(c_e)$ and closer to $\epsilon_{\theta}(c_n)$.
This process makes the target concept $c_e$ one of the affected concepts or a candidate for the inner maximization w.r.t. $c_a$.
However, selecting $c_a = c_e$ would directly conflict with the erasing loss L1 that aims to drive $c_e \rightarrow c_n$.
Consequently, the inner maximization process steers towards selecting the most affected concepts but not the target concept $c_e$ or its synonyms.
Empirical perspective:
As mentioned in Appendix B.4 (lines 835-844) and shown in Figure 12, the intermediate results of the adversarial concept selection process indicate that the model initially selects concepts similar to the target. For instance, Figure 12 shows that 'truck,' 'music,' 'church,' 'French,' and 'bag' are selected as adversarial concepts for 'Garbage truck,' 'Cassette player,' 'Church,' 'French horn,' and 'Parachute,' respectively. Over time, the model shifts towards less similar but highly affected concepts.
Moreover, our response to another question demonstrates that our method effectively erases both the target concept and its synonyms, as shown by the results of erasing the synonyms.
In summary, both theoretical and empirical evidence support that our method naturally prioritizes preserving the most affected concepts, which are not the synonyms of the target concept but those significantly impacted by its erasure.
**Q: Compare to MACE**
Following the reviewer's suggestion, we conducted an additional experiment using MACE with their official implementation on object-related concepts, as detailed in section 5.1 of the main paper.
Specifically, we conducted four distinct tasks, each involving the erasure of five concepts from the Imagenette dataset.
In addition, we evaluated the preservation performance with common concepts from the COCO-30K dataset with FID and CLIP scores.
Due to time constraints, we were only able to complete one task, which involved erasing five concepts: Cassette Player, Church, Garbage Truck, Parachute, and French Horn.
The results are presented in Table 1 in the attached document.
Regarding erasing performance, MACE slightly outperformed our method by 0.7\% in ESR-1 and 0.5\% in ESR-5 on average.
However, in terms of preservation performance, our method significantly outperformed MACE, with a gap of 8\% in PSR-1 and 7\% in PSR-5.
Additionally, our method achieved superior preservation performance in image generation with common concepts, evidenced by the lowest FID score of 16.3 and the highest CLIP score of 26.1.
These results indicate that while MACE shows marginally better erasing performance, our method excels significantly in preservation performance, even when compared to the latest techniques.
**Q: Evaluating FID on COCO dataset in erasing NSFW setting**
We already measured the FID on the COCO dataset and provided the results in Table 2 in the main paper.
---
Rebuttal 2:
Title: Looking forward to your responses!
Comment: As the rebuttal period is coming to an end, we hope to have clarified the novelty of our contribution and addressed the concerns of the reviewer. We truly appreciate the reviewer's time on our paper and we are looking forward to your responses and/or any other potential questions. Your feedback would be very helpful for us to identify any ambiguities in the paper for further improvement. | Summary: This paper studies the memory-forgetting tradeoff for concept removal. The authors systematically summarize the tradeoff problem, and propose the idea of adversarial concepts to solve it. Specifically, this approach automatically detects the most sensitive concepts that will be affected by unlearning, and enforces the model to maintain the performance of sensitive concepts by adding the maintaining loss.
Strengths: 1. This paper is well-written
2. The summary of the performance drop problem for unlearning is systematic.
Weaknesses: 1. Results are mainly on SD1.4. Additional results on other stable diffusion models should be included. For example, on larger diffusion models like SD XL, to further validate the effectiveness of the proposed method.
2. In the experiment section, comparisons with existing state-of-the-art methods, such as SPM [1], are lacking.
3. The ESR-k and PSR-k metrics used by the authors are reasonable and widely adopted by previous methods. However, additional metrics, such as FID and CLIP Score, should also be included to demonstrate the model's performance in the object unlearning scenario. Similarly, evaluating the FID metric on COCO-30K is also necessary.
4. When visualizing results in the appendix, the authors should show images before and after unlearning to provide a more intuitive sense of performance. Specifically, changes in unrelated concepts should be minimal. The current images only indicate that the concept has not been forgotten, rather than showing that the generated images under the prompt have not changed.
5. Efficiency is also an important evaluation metric in machine unlearning. The authors should adequately compare the efficiency of their method with previous methods.
6. When comparing methods, the authors should contrast their preservation methods with previous methods, such as the modules used in ConAbl [2] and SPM [1], to highlight performance differences.
[1] One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models, and Erasing Applications
[2] Ablating Concepts in Text-to-Image Diffusion Models
Technical Quality: 2
Clarity: 3
Questions for Authors: How to select the parameter $\lambda$? It seems that different concepts may need different coefficients.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and suggestions. We would like to address the remaining concerns as follows.
Due to the space limitation, some responses are provided in the author's comment section.
**Q: Additional experiments with SOTA methods such as ConAbl and SPM, and evaluation with FID and CLIP scores on COCO-30K dataset.**
We followed the reviewer's suggestion and conducted additional experiments with MACE, a SOTA method recently accepted at CVPR 2024.
It is worth noting that we already compared with the suggested baseline ConAbl denoted as CA in our paper.
Specifically, we performed the object-related setting which includes four tasks, each involving the erasure of five concepts from the Imagenette dataset.
In addition to ESR-1, ESR-5, PSR-1, and PSR-5 metrics, we generated images with common concepts from the COCO-30K dataset and assessed FID and CLIP scores.
Due to time constraints, we completed one task, erasing five concepts: Cassette Player, Church, Garbage Truck, Parachute, and French Horn.
The results are presented in the table below.
| Method | ESR-1 ↑ | ESR-5 ↑ | PSR-1 ↑ | PSR-5 ↑ | FID ↓ | CLIP ↑ | FT Time |
|--------|---------|---------|---------|---------|-------|--------|---------|
| ESD | 95.5 ± 0.8 | 88.9 ± 1.0 | 41.2 ± 12.9 | 56.1 ± 12.4 | 17.9 | 24.5 | 40 mins |
| UCE | 100 ± 0.0 | 100 ± 0.0 | 23.4 ± 3.6 | 49.5 ± 8.0 | 19.1 | 21.4 | 4 mins |
| CA | 98.4 ± 0.3 | 96.8 ± 6.1 | 44.2 ± 9.7 | 66.5 ± 6.1 | 16.6 | 25.8 | 12 mins |
| MACE | 99.3 ± 0.3 | 97.6 ± 1.2 | 47.4 ± 12.0 | 72.8 ± 10.5 | 16.9 | 24.9 | 3 mins |
| Ours | 98.6 ± 1.1 | 96.1 ± 2.7 | 55.2 ± 10.0 | 79.9 ± 2.8 | 16.3 | 26.1 | 65 mins |
Regarding erasing performance, MACE slightly outperformed our method by 0.7% in ESR-1 and 0.5% in ESR-5 on average.
However, in terms of preservation performance, our method significantly outperformed MACE, with an 8% gap in PSR-1 and a 7% gap in PSR-5.
Additionally, our method achieved superior preservation performance in image generation with common concepts, evidenced by the lowest FID score of 16.3 and the highest CLIP score of 26.1.
These results indicate that while MACE shows marginally better erasing performance, our method excels significantly in preservation performance, even when compared to the latest techniques.
**Compared to SPM:**
The SPM method, although effective in concept erasure, does not directly fine-tune the original model.
Instead, it trains a separate adapter that, when attached to the original model, prevents the generation of the erased concept.
Specifically, Section 3.1 of the SPM paper introduces a new diffusion process $\hat{\epsilon} = \epsilon(x_t, c, t \mid \theta, \mathcal{M}_{c_e})$, where $\mathcal{M}_{c_e}$ is the adapter model trained to erase the concept $c_e$.
While these adapters can be shared and reused across different models, the original model $\theta$ remains unchanged, allowing malicious users to generate the erased concepts easily.
In contrast, our method, along with the other baselines in our paper, directly erases the concept from the original model, making it more robust and preventing the generation of the erased concepts.
For this reason, we did not include SPM in the comparison.
**Q: Additional experiments with larger models**
Due to resource constraints in the short rebuttal timeframe, we were only able to conduct additional experiments on SD v1.4. However, it is worth noting that SD v1.4 is still the most widely used model in the community, including recent work such as MACE.
We will consider conducting experiments on larger models like SD XL in future work.
**Q: Evaluating Efficiency**
Following the reviewer's suggestion, we provided the fine-tuning time for the object-related concepts task in the table above.
It is worth noting that we could only find efficiency evaluation in the SPM paper, which is specifically designed to highlight the efficiency of their lightweight adapter approach.
Our method, which was designed to prioritize erasing and preservation performance, may not be as efficient as SPM in terms of fine-tuning time.
However, we respectfully argue that efficiency is not the main concern in our work or in the other baselines, as the fine-tuning process is relatively fast, typically completing in less than a few hours, which is acceptable in practice for infrequent concept-erasure requests.
**Q: Evaluating the impact of $\lambda$**
To investigate the impact of different $\lambda$ values, we conducted additional experiments on the object-related concepts task with $\lambda = 0.1, 0.5, 5, 10$.
The results are presented in the table below.
There is a clear trade-off between erasing and preserving performance when changing the $\lambda$ value.
A smaller $\lambda$ value results in better erasing performance but worse preservation performance, and vice versa.
In our experiments, we did not attempt to tune $\lambda$ for each concept but simply set it to 1 for all experiments, as mentioned in line 774.
| λ | ESR-1 ↑ | ESR-5 ↑ | PSR-1 ↑ | PSR-5 ↑ |
|------------|---------|---------|---------|---------|
| 0.1 | 97.88 | 94.52 | 29.44 | 40.72 |
| 0.5 | 98.28 | 94.32 | 56.04 | 73.92 |
| 1 | 98.72 | 95.60 | 63.80 | 82.96 |
| 5 | 96.96 | 91.68 | 74.84 | 91.52 |
| 10 | 91.64 | 84.48 | 83.04 | 96.64 |
---
Rebuttal 2:
Title: Further response
Comment: **Q: Better visualization**
In Figures 9-11 in the Appendix, we already provided a comparison between the output generated by the same prompt using the original model and the sanitized models after erasing each specific artistic style.
In each sub-figure, the first column shows the images generated by the original model, while the second to sixth columns display the images generated by the sanitized models. Each row corresponds to a prompt of one of these five artists.
The ideal erasure should result in changes in the diagonal pictures (marked by a red box) compared to the first column, while the off-diagonal pictures should remain the same.
We believe that these visualizations are clear and informative, providing a direct comparison between the original and sanitized models.
---
Rebuttal 3:
Title: Looking forward to your responses!
Comment: As the rebuttal period is coming to an end, we hope to have clarified the novelty of our contribution and addressed the concerns of the reviewer. We truly appreciate the reviewer's time on our paper and we are looking forward to your responses and/or any other potential questions. Your feedback would be very helpful for us to identify any ambiguities in the paper for further improvement. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful comments and suggestions. Below are our responses to some important questions raised by the reviewers. We kindly request the reviewers to consider raising the scores if our responses adequately address the remaining concerns.
**Q: Compare to MACE a SOTA method**
Following the reviewer's suggestion, we conducted an additional experiment using MACE (accepted to CVPR 2024) with their official implementation on object-related concepts, as detailed in section 5.1 of the main paper.
Specifically, we conducted four distinct tasks, each involving the erasure of five concepts from the Imagenette dataset.
In addition, we evaluated the preservation performance with common concepts from the COCO-30K dataset with FID and CLIP scores.
Due to time constraints, we were only able to complete one task, which involved erasing five concepts: Cassette Player, Church, Garbage Truck, Parachute, and French Horn.
The results are presented in Table 1 in the attached document and below:
| Method | ESR-1 ↑ | ESR-5 ↑ | PSR-1 ↑ | PSR-5 ↑ | FID ↓ | CLIP ↑ |
|--------|---------------|---------------|---------------|---------------|-------|--------|
| ESD | 95.5 ± 0.8 | 88.9 ± 1.0 | 41.2 ± 12.9 | 56.1 ± 12.4 | 17.9 | 24.5 |
| UCE | 100 ± 0.0 | 100 ± 0.0 | 23.4 ± 3.6 | 49.5 ± 8.0 | 19.1 | 21.4 |
| CA | 98.4 ± 0.3 | 96.8 ± 6.1 | 44.2 ± 9.7 | 66.5 ± 6.1 | 16.6 | 25.8 |
| MACE | 99.3 ± 0.3 | 97.6 ± 1.2 | 47.4 ± 12.0 | 72.8 ± 10.5 | 16.9 | 24.9 |
| Ours | 98.6 ± 1.1 | 96.1 ± 2.7 | 55.2 ± 10.0 | 79.9 ± 2.8 | 16.3 | 26.1 |
Regarding erasing performance, MACE slightly outperformed our method by 0.7% in ESR-1 and 0.5% in ESR-5 on average.
However, in terms of preservation performance, our method significantly outperformed MACE, with a gap of 8% in PSR-1 and 7% in PSR-5.
Additionally, our method achieved superior preservation performance in image generation with common concepts, evidenced by the lowest FID score of 16.3 and the highest CLIP score of 26.1.
These results indicate that while MACE shows marginally better erasing performance, our method excels significantly in preservation performance, even when compared to the latest techniques.
**Q: Evaluating performance on erasing synonyms**
We follow the suggestion to evaluate the erasing capability on the synonyms of object-related concepts, e.g., "Church".
More specifically, we first utilize a set of tools including ChatGPT, Dictionary/Thesaurus.com, and Google image search to find the best synonyms for each target concept.
To verify that these synonyms indeed resemble the target concept, we then use the original model to generate images from each synonym (e.g., "a photo of Chapel")
and use a ResNet-50 model to classify the generated images.
We keep only the synonyms whose top-5 accuracy is higher than 50%, to ensure that they are indeed generation-similar to the target concept.
As a result, for some concepts such as "Golf ball" or "Chain saw", we could not find any good synonyms beyond minor variations.
We report the (top-1; top-5) accuracy of each synonym below, alongside the numbers for the target concepts; the higher the accuracy, the more similar the synonym is to the target concept. Due to the space constraint, we are only able to provide the numbers for 6 of the 10 concepts.
- **Church (84.4;100.0)**: chapel (80.0;100.0), cathedral (50.0;100.0), minster (87.5;100.0), basilica (32.5;100.0)
- **Garbage truck (83.2;99.2)**: trash truck (87.5;97.5), refuse truck (80.0;100.0), waste collection vehicle (97.5;100.0), sanitation truck (47.5;100.0)
- **Parachute (95.2;99.2)**: skydiving chute (93.9;100.0), paraglider (100.0;100.0)
- **Chain saw (76.4;89.0)**: chainsaw (92.0;96.0), power saw (26.0;58.0)
- **Tench (76.0;98.0)**: cyprinus tinca (60.0;95.0), cyprinus zeelt (52.5;100.0)
- **Golf ball (98.2;99.2)**: golfing ball (99.0;99.0)
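To make the filtering rule concrete, here is a hedged sketch of the top-k accuracy computation used to keep or drop a synonym (the function names and the toy logits are our illustration; the actual pipeline runs a pretrained ResNet-50 on the generated images):

```python
import numpy as np

# Hedged sketch: top-k accuracy over classifier outputs.
# `logits` is (num_images, num_classes); a synonym is kept when the
# top-5 accuracy for the target ImageNet class exceeds 50%.
def topk_accuracy(logits, target_class, k=5):
    # indices of the k highest-scoring classes per image
    topk = np.argsort(logits, axis=1)[:, -k:]
    return float(np.mean([target_class in row for row in topk]))

def keep_synonym(logits, target_class, threshold=0.5):
    # filtering rule used to decide whether a synonym is "valid"
    return topk_accuracy(logits, target_class, k=5) > threshold
```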
Given the list of "valid" synonyms, we then generate images from them using the sanitized models obtained from the four object-related settings (each corresponding to erasing five Imagenette concepts simultaneously).
The results are shown in Table 2 in the attached document and as below:
| Method | $\text{ESR}_{s}$-1 ↑ | $\text{ESR}_{s}$-5 ↑ | $\text{PSR}_{s}$-1 ↑ | $\text{PSR}_{s}$-5 ↑ |
|--------|----------------------|----------------------|----------------------|----------------------|
| SD-org | 22.0 ± 11.6 | 2.4 ± 1.4 | 78.0 ± 11.6 | 97.6 ± 1.4 |
| SD-syn | 41.5 ± 8.2 | 7.5 ± 2.3 | 58.5 ± 8.2 | 92.5 ± 2.3 |
| UCE | 99.8 ± 0.1 | 99.2 ± 0.5 | 19.5 ± 4.4 | 43.8 ± 0.6 |
| MACE | 98.1 ± 0.9 | 84.7 ± 2.2 | 41.4 ± 10.3 | 73.3 ± 3.1 |
| Ours | 85.2 ± 6.1 | 72.3 ± 7.1 | 46.6 ± 6.5 | 82.9 ± 4.5 |
Firstly, comparing SD-org and SD-syn (generation from the original target concepts versus their synonyms),
we can see that the top-1 accuracy of SD-syn is significantly lower than that of SD-org, while the top-5 accuracy is lower by only 5% on average.
This does not mean that the model could not generate meaningful images from the synonyms; rather, the generated images are not recognized as the target concept in the ResNet-50's top-1 prediction but still appear in its top-5.
Secondly, MACE is the best method at erasing synonyms, followed by our method.
However, as a trade-off, our method is better at preserving performance, not only for the original concepts (as shown in the main paper) but also for their synonyms.
Thirdly, the FID and CLIP scores should also be considered when evaluating preservation performance, and on these our method is much better than MACE.
Pdf: /pdf/c14f55463915ab1d87b3ab15c951dea21bbb757f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration | Accept (poster) | Summary: Volume rendering requires numerical integration for estimating output colors. This work proposes using the Gauss-Laguerre quadrature to reduce the number of samples and improve integration accuracy. The paper demonstrates that this method can be a plug-and-play module for any NeRF model. Experimental results show that with a limited drop in performance, the GL-NeRF can significantly reduce the number of ray samples and MLP calls.
Strengths: 1. The perspective of improving quadrature for NeRF is new and interesting.
2. The formulation of using the Gauss-Laguerre quadrature looks good, and the mathematical formulation appears rigorous.
Weaknesses: 1. The results show a performance drop of about ~2 PSNR, while the speed improvement is not significant.
2. The motivation is not strong. Most state-of-the-art NeRF approaches use shallow MLPs or even no MLPs, making the evaluation less expensive. Reducing ray samples does not seem to address a core issue in radiance field research.
3. How the points are selected is unclear. The method requires approximating polynomial coefficients and solving for the roots of $x$. However, since $x$ is a highly non-linear function of $t$, finding the samples $t$ unavoidably requires root finding along the ray, which does not seem to actually improve accuracy or efficiency.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Why use a look-up table? Will this lead to worse performance?
2. Why did the performance not match TensoRF? Would increasing the number of point samples help?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The paper mentions that it has a theoretical guarantee of the highest precision. However, there is no evidence that the current precision matches previous works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We’re thankful for your time and the valuable insights you’ve shared. Your input has significantly advanced our project. In response to your feedback, we proposed a brand new perspective for volume rendering with a strong math foundation and validated it with experiments. We’ll address your concerns below.
> The results show a performance drop of about ~2 PSNR, while the speed improvement is not significant.
Since vanilla NeRF's speed is bottlenecked by the coarse network, we showcase the significant improvement in speed using TensoRF in the general response and paste it below. Our method achieves comparable quality to the baseline while running in almost real time on an AMD Ryzen 9 5900HS CPU.
| Method | PSNR | SSIM | LPIPS | FPS |
| -------------- | ----- | ---- | ----- | ----- |
| TensoRF | 33.28 | 0.97 | 0.016 | 5.84 |
| Ours + TensoRF | 33.09 | 0.97 | 0.016 | 22.34 |
> The motivation is not strong. Most state-of-the-art NeRF approaches use shallow MLPs or even no MLPs, making the evaluation less expensive. Reducing ray samples does not seem to address a core issue in radiance field research.
We’d like to highlight that the position of our paper is a ***brand new perspective for volume rendering***, and its ***applicability towards the general NeRF pipeline*** that relies on volume rendering. Therefore, the experiments validate that it can be combined with any existing NeRF pipeline that relies on volume rendering regardless of the underlying representation. GL-NeRF is orthogonal to existing state-of-the-art work in the sense that all other works introduce additional representation such as neural networks and grids while GL-NeRF focuses only on the volume rendering integral itself. We believe it's a promising exploration of volume rendering and would bring insights into the radiance field research.
> How the points are selected is unclear. The method requires approximating polynomial coefficients and resolving the roots for $x$. However, since $x$ is a highly non-linear function of $t$, finding the samples $t$ unavoidably requires root finding along the ray, which does not seem to actually improve accuracy or efficiency.
We'd like to point out that we do not need to compute the polynomial coefficients or solve for the roots. All the roots and coefficients needed are well defined and can be looked up online or in NumPy; please refer to the documentation of the function `numpy.polynomial.laguerre.laggauss` for details.
As for finding $t$ on the ray, traditional volume rendering approximates the volume density with a piecewise-constant PDF, and so do we. Therefore, there is no need for root finding of a non-linear function along the ray; we use the same approximation as the hierarchical sampling strategy.
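To illustrate why no root finding is required, here is a hypothetical NumPy sketch (our naming, not the paper's code): with a piecewise-constant density, the cumulative optical depth $\tau(t) = \int_0^t \sigma\,ds$ is piecewise linear, so mapping the Gauss-Laguerre abscissae back to ray positions reduces to linear interpolation.

```python
import numpy as np

# Illustrative sketch, assuming t_bins are (N+1,) increasing bin edges
# and sigma is the (N,) non-negative density per bin.
def gl_sample_points(t_bins, sigma, n_points=4):
    # cumulative optical depth tau at the bin edges (piecewise linear in t)
    deltas = np.diff(t_bins)
    tau = np.concatenate([[0.0], np.cumsum(sigma * deltas)])
    # predefined Gauss-Laguerre abscissae and weights (a look-up, O(1))
    x, w = np.polynomial.laguerre.laggauss(n_points)
    # clamp abscissae beyond the accumulated optical depth to the far bound
    x = np.clip(x, 0.0, tau[-1])
    # invert the piecewise-linear tau(t) by interpolation -- no root finding
    t = np.interp(x, tau, t_bins)
    return t, w
```

With the sample positions in hand, the color integral is approximated as $\sum_i w_i\, c(t_i)$, which is why only `n_points` color evaluations are needed.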
> Why use a look-up table? Will this lead to worse performance?
The Gauss-Laguerre quadrature is a well-known formula in numerical analysis, so there is no need to solve for the roots and coefficients at run time. Since the table look-up is $O(1)$ in time complexity, it will not lead to worse performance.
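For illustration, the look-up amounts to a single documented NumPy call (this snippet is ours, not from the paper); since an $n$-point rule is exact for polynomials up to degree $2n-1$, the weights alone recover $\int_0^\infty e^{-x}\,dx = 1$:

```python
from numpy.polynomial.laguerre import laggauss

# Gauss-Laguerre nodes x_i and weights w_i approximate
#   integral_0^inf f(x) exp(-x) dx  ~  sum_i w_i f(x_i)
x, w = laggauss(4)

# f(x) = 1 is degree 0, so the rule is exact: sum(w) = 1
print(w.sum())
# f(x) = x is also exact: integral_0^inf x exp(-x) dx = 1
print((w * x).sum())
```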
> Why did the performance not match TensoRF? Would increasing the number of point samples help?
We'd like to first point out that GL-NeRF uses fewer points than TensoRF (4 vs. ~32), leading to lower quantitative performance with no visible quality drop (our qualitative results are randomly chosen from the images, not cherry-picked). On the other hand, when we increase the number of points used by GL-NeRF, the performance reaches a bottleneck because the predefined Laguerre weights approach zero and fall below machine precision (for 32-bit float, $1.175494 \times 10^{-38}$). Therefore, we focus on the fact that GL-NeRF, with higher precision, can reduce the number of sample points by a large margin.
---
Rebuttal Comment 1.1:
Title: Response by Reviewer PEjS
Comment: Thank you for your explanation. Most of my initial concerns have been satisfactorily addressed, and I now recognize the novelty in the proposed method. Although the current approach may seem somewhat limited in its impact, I believe it holds potential for broader applications in other areas. Given these considerations, I would like to raise my score.
---
Reply to Comment 1.1.1:
Title: Thank You for Your Encouragement and Recognition
Comment: Your recognition of our novelty is truly encouraging. And thank you again for your insightful feedback. We greatly appreciate your positive recommendation! | Summary: This paper proposes a computational method for volume rendering using Gauss-Laguerre quadrature. In the context of NeRF, volume rendering is performed by evaluating MLPs (or other data structures) at a sequence of query points on a ray and integrating the weighted results. The proposed method reduces the number of evaluations without much performance degradation by computing the integrations using Gauss-Laguerre quadrature. The proposed method has been embedded in vanilla NeRF and TensoRF for validation, and the reviewer reports reductions in computation time and memory usage. The reviewer acknowledges and appreciates the effectiveness of the proposed method and looks forward to future discussions on the generality of the proposed method (other backbones and learning time applications).
Strengths: - As emphasized in the paper, the approach proposed in this paper is a replacement of integral computations, not a learning of sampling or a change in data structure. Therefore, it is applicable and highly available for various NeRF variants that use volume rendering.
- The position of the proposed method in NeRF research is clearly stated, especially the related work is very clearly described as an introduction to the proposed method.
- In the introduction of the method, the reviewer's first question was l.177, and the paper answers the question. This helps the understanding of the paper and increases the credibility of the proposed method.
Weaknesses: - The proposed method results in dense sampling near the surface, but can it handle translucent objects? Intuitively, there seems to be a correlation between density solidity and rendering quality. It would be desirable to discuss for which scenes the proposed method is effective and for which scenes it is not.
- The effectiveness of the proposed method as an integration method for oracle or trained models is certain. In general, however, volume rendering is used for both training and inference. It would be better to discuss whether the proposed method is specialized for inference or whether it could be used for learning as well.
- We expect the availability of the proposed method to be very broad. If possible, the application of the proposed method to NeRF backbones other than vanilla NeRF and TensoRF could be considered to further demonstrate the generality of the proposed method.
- The correspondence between the plots in Figure 4 is not clear. It would be easier to compare the results if the corresponding scenes of vanilla and the proposed method were connected by a line.
- Small Comment: Related works -> Related work
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the proposed method be used for learning? If so, the proposed method could be a very powerful tool. (identical to the description in the Weaknesses section).
- Why are the results for TensoRF not shown in Figure 4?
- The sample points up to 8 are very small compared to the use of more than 100 sample points in vanilla NeRF. Could the performance of vanilla NeRF be exceeded with more sample points? In other words, vanilla NeRF also approximates integration by discretization, and this sampling density is constant. We believe that a comparison of vanilla NeRF and the proposed method in terms of sample points vs. image quality will more convincingly demonstrate the effectiveness of the proposed method.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - I agree with the limitations described in the conclusion. This reviewer believes that the proposed method contributes to real-time rendering, but it better to be validated to make this claim in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for dedicating your time and providing such perceptive feedback. Your recommendations have considerably enhanced our work. Based on your comments, we have provided a brand new general framework for computing volume rendering and validated its applicability on two baselines. We will address your concerns below.
> The proposed method results in dense sampling near the surface, but can it handle translucent objects? Intuitively, there seems to be a correlation between density solidity and rendering quality. It would be desirable to discuss for which scenes the proposed method is effective and for which scenes it is not.
Since volume rendering is originally derived for pure volumetric data, it is inherently not suitable for modeling translucent objects. To verify this argument, we conduct experiments using TensoRF on the DexNeRF dataset[5] and report the result here.
| Method | PSNR | SSIM | LPIPS |
| -------------- | ----- | ---- | ----- |
| TensoRF | 24.02 | 0.86 | 0.288 |
| Ours + TensoRF | 23.99 | 0.86 | 0.298 |
The result shows that our approach performs similarly to traditional volume rendering methods. However, purely volumetric scenes remain the ideal case, since TensoRF itself struggles to model translucent objects.
> The effectiveness of the proposed method ... is certain. In general, however, volume rendering is used for both training and inference. It would be better to discuss whether the proposed method ... could be used for learning as well.
Since our focus is the training-free aspect of the proposed method, the results of network training with it are not reported in the paper. However, GL-NeRF can be used for learning as well. We conduct experiments combining GL-NeRF with Vanilla NeRF and show the results here. The training time is also reduced by $1.2\times$ to $2\times$.
---
Blender
| Method | PSNR | SSIM | LPIPS |
| ------------------------------- | ----- | ---- | ----- |
| Vanilla NeRF | 30.63 | 0.95 | 0.037 |
| Ours + Vanilla NeRF (training) | 29.18 | 0.93 | 0.056 |
| Ours + Vanilla NeRF (test-only) | 28.56 | 0.93 | 0.070 |
LLFF
| Method | PSNR | SSIM | LPIPS |
| ------------------------------- | ----- | ---- | ----- |
| Vanilla NeRF | 27.62 | 0.88 | 0.073 |
| Ours + Vanilla NeRF (training) | 27.21 | 0.87 | 0.087 |
| Ours + Vanilla NeRF (test-only) | 26.53 | 0.85 | 0.090 |
---
> We expect the availability of the proposed method to be very broad. If possible, the application of the proposed method to NeRF backbones other than vanilla NeRF and TensoRF could be considered to further demonstrate the generality of the proposed method.
Please refer to the general response for the combination of GL-NeRF with Instant NGP.
> The correspondence between the plots in Figure 4 is not clear. It would be easier to compare the results if the corresponding scenes of vanilla and the proposed method were connected by a line.
> Small Comment: Related works -> Related work
Thank you for the constructive comments! We'll modify the figure and the writing as suggested.
> Why are the results for TensoRF not shown in Figure 4?
As mentioned in MCNeRF, NVIDIA GPUs have specialized components to accelerate neural network inference, so evaluating on these devices with more sample points may not cost that much. We instead implement a WebGL-based renderer, as WebGL is a more accessible platform that is agnostic to the underlying hardware [1, 2, 3, 4]. Please find the result in the General Response section.
> The sample points up to 8 are very small compared to the use of more than 100 sample points in vanilla NeRF. Could the performance of vanilla NeRF be exceeded with more sample points? In other words, vanilla NeRF also approximates integration by discretization, and this sampling density is constant. We believe that a comparison of vanilla NeRF and the proposed method in terms of sample points vs. image quality will more convincingly demonstrate the effectiveness of the proposed method.
Thanks for this very interesting point. When we increase the number of points used by GL-NeRF, the performance reaches a bottleneck because the predefined Laguerre weights approach zero and become smaller than machine precision. Therefore, we focus on the fact that GL-NeRF, with higher precision, can reduce the number of sample points by a large margin. An example of the Gauss-Laguerre quadrature look-up table with n = 64 is presented here. Notice that as n gets bigger, the weights become extremely small and drop below machine precision (for 32-bit float, $1.175494 \times 10^{-38}$). We do have a comparison of sample points vs. image quality with a smaller number of points, shown here.
| n | weight | x |
| ---- | --------------- | -------------- |
| 1 | $0.0563$ | $0.0224$ |
| 2 | $0.119$ | $0.118$ |
| 3 | $0.157$ | $0.290$ |
| ... | ... | ... |
| 62 | $1.592\times 10^{-88}$ | $204.672$ |
| 63 | $2.989\times 10^{-94}$ | $218.032$ |
| 64 | $2.089\times 10^{-101}$ | $234.810$ |
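This underflow behavior can be checked directly with NumPy's `laggauss` (an illustrative snippet of ours; the exact tail values depend on the float64 computation):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# For n = 64 the weights at the largest nodes are tiny: representable
# in float64, but far below the smallest normal 32-bit float
# (~1.18e-38), so they underflow to zero when cast to float32.
x, w = laggauss(64)
print(w[-1])              # tail weight, on the order of 1e-101
print(np.float32(w[-1]))  # underflows to 0.0 in float32
```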
>[1] Gupta, Kunal, et al. "MCNeRF: Monte Carlo rendering and denoising for real-time NeRFs." SIGGRAPH Asia 2023 Conference Papers. 2023.
>[2] Chen, Zhiqin, et al. "Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
>[3] Reiser, Christian, et al. "Merf: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes." ACM Transactions on Graphics (TOG) 42.4 (2023): 1-12.
>[4] Yariv, Lior, et al. "Bakedsdf: Meshing neural sdfs for real-time view synthesis." ACM SIGGRAPH 2023 Conference Proceedings. 2023.
>[5] Ichnowski, Jeffrey, et al. "Dex-NeRF: Using a neural radiance field to grasp transparent objects." arXiv preprint arXiv:2110.14217 (2021).
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed point-by-point responses from the authors. Especially, the additional results suggest that the proposed method is useful for learning, which is very important result. Although a more detailed verification of the stability of the learning and the dependence on initial values is needed in my opinion, I think this is beyond the scope of the paper.
At this point I have no further questions. Given the content of the rebuttal, my current judgment is to increase the score. I will carefully monitor the progress of other discussions and reevaluate if necessary.
---
Rebuttal 2:
Title: Thank You for Your Recognition and Positive Recommendation
Comment: We truly value your acknowledgment of our work and your recognition of our responses. Your insightful feedback and favorable recommendation are deeply appreciated. | Summary: This paper focuses on accelerating novel view synthesis using neural radiance fields (NeRF). Unlike previous works that concentrate on designing lightweight networks, this study is motivated by the specific volume rendering formula, which includes a negative exponential term in the integration function. By employing Gauss-Laguerre quadrature, the authors approximate this complex integral operation, thus improving the rendering speed of existing NeRFs. This approach is validated on two backbones: the original NeRF and TensoRF, demonstrating speed improvements ranging from 1.2X to 2X.
Strengths: 1. The idea is intriguing and represents a promising exploration originating from the specific volume rendering formula.
2. This paper is highly theoretical.
Weaknesses: see Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the experiment, the authors validated the proposed method on two backbones (the original NeRF and the TensoRF). However, neither of these represents the current fastest method. It is suggested to compare the proposed method with Instant NGP, DVGO, or other faster alternatives. Such comparisons could not only better verify the plug-and-play capability but also significantly enhance the impact of the paper.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and thoughtful feedback you've given. Your suggestions have greatly improved our work. Considering your insights, we have provided a highly theoretical framework for computing volume rendering and validated it using two baselines. We will address your concerns below.
>In the experiment, the authors validated the proposed method on two backbones (the original NeRF and the TensoRF). However, neither of these represents the current fastest method. It is suggested to compare the proposed method with Instant NGP, DVGO, or other faster alternatives. Such comparisons could not only better verify the plug-and-play capability but also significantly enhance the impact of the paper.
Since Instant NGP is a representative grid-based representation for volume density, we combined GL-NeRF with Instant NGP as suggested and compared the two versions. We believe doing so provides a more comprehensive perspective on the plug-and-play attribute of GL-NeRF. As can be seen from the table, our method achieves comparable performance to Instant NGP using only $\frac{1}{8}$ of the color MLP calls. Since the experiment using TensoRF on WebGL has shown that reducing the color MLP calls can lead to significant speedup, here we simply showcase the number of color MLP calls needed in the table. The experiment indicates that as long as the underlying radiance field pipeline relies on volume rendering, GL-NeRF can be an ***off-the-shelf replacement*** for dense sampling / hierarchical sampling, agnostic to the underlying representation.
| Method | PSNR | Avg. color MLP calls |
| ------------------ | ----- | --------------------- |
| Instant NGP | 32.05 | 30.90 |
| Ours + Instant NGP | 30.35 | 4.00 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and additional experiments. All of my concerns have been addressed. It is a good exploration by replacing the classical volume rendering with Gauss-Laguerre quadrature.
---
Rebuttal 2:
Title: Grateful for Your Recognition and Positive Recommendation
Comment: We greatly appreciate your recognition of our work's novelty. We sincerely thank you for your thoughtful feedback and are grateful for your positive recommendation.
Strengths: The mathematical aspects of the paper are detailed and well supported. It is very clear what the method is trying to achieve and how the Gauss-Laguerre formulation is being applied. Generally, the quality of explanation is good and not too hard to follow, even though the math is fairly dense.
Overall, I think this is an original idea with some likely applications in volume rendering.
Weaknesses: The main issue with this paper is that the experiments do little to give an idea of how the proposed method would compare to the current state of the art, which has progressed quite significantly beyond the baselines shown here. While it is quite believable that the GL method outperforms the naive hierarchical sampling of the original NeRF, I have significant doubts about that holding when applied to something more recent like the proposal networks from Mip-NeRF 360 and Zip-NeRF.
Given the weak evaluation, I would lean towards rejecting.
Technical Quality: 3
Clarity: 3
Questions for Authors: It is strange that timings are reported for vanilla NeRF but not TensoRF, which would presumably see a larger gain. Is there a reason for this?
I would suggest that the authors try to show quantifiable improvement on a more recent baseline. If there were a substantial run-time speedup for TensoRF, that would be good, but something like Mip-NeRF 360 or Zip-NeRF would be even better, as that would show how GL compares to another method which tries to draw samples near the surface.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: It is mentioned, but I think the writing could make it a lot more clear that only the color samples are being reduced, as this significantly affects where the method would actually be expected to provide a speedup. As it is, this is quite easy to miss and could lead to misunderstanding if one does not read the method carefully.
I don't think there are any notable concerns with the paper regarding societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your time and insightful comments. Your valuable suggestions have significantly elevated our work. In light of your comments, we have proposed a brand new mathematical perspective for volume rendering and proved its effectiveness on speedup using two baselines. We will address your concerns below.
> The main issue with this paper is that the experiments do little to give an idea of how the proposed method would compare to the current state of the art.
We validated it with three baseline approaches (vanilla NeRF, TensoRF, and Instant NGP in the rebuttal) and showed that our method has a plug-and-play attribute. In the experiments, we primarily aim to demonstrate GL-NeRF's plug-and-play attribute: it provides comparable performance while reducing the number of sample points, which yields speedup as a free lunch. Moreover, we'd like to point out that our paper's main contribution is the mathematical perspective itself.
> proposal networks from Mip-NeRF 360 and Zip-NeRF.
While Mip-NeRF 360 focuses on anti-aliasing and Zip-NeRF is its combination with Instant NGP, we instead conduct experiments by combining GL-NeRF with InstantNGP since our focus is on the volume rendering integral itself. The results can be found in the general response and we paste it here for better reference. The result along with the results on Vanilla NeRF and TensoRF indicates that our method can be incorporated into any NeRF pipeline that relies on volume rendering regardless of the underlying representation.
| Method | PSNR | Avg. color MLPs calls |
| ------------------ | ----- | --------------------- |
| Instant NGP | 32.05 | 30.90 |
| Ours + Instant NGP | 30.35 | 4.00 |
On the other hand, our method needs no training, therefore can be plugged into any existing pipeline, which is the main advantage of our work compared to e.g. the proposal networks from Mip-NeRF 360 and Zip-NeRF.
> It is strange that timings are reported for vanilla NeRF but not TensoRF, which would presumably see a larger gain. Is there a reason for this?
As mentioned in MCNeRF, NVIDIA GPUs have specialized components to accelerate neural network inference, so evaluating more sample points on such devices may not cost that much. We therefore implemented a WebGL-based renderer instead, as WebGL is a more accessible platform that is agnostic to the underlying hardware [1, 2, 3, 4]. The result is in the General Response section, and we paste it here for better reference. As can be seen from the table, our method achieves comparable quality to TensoRF and runs at almost real-time rates in WebGL on an AMD Ryzen 9 5900HS CPU ***by only reducing the sampling points***.
| Method | PSNR | SSIM | LPIPS | FPS |
| -------------- | ----- | ---- | ----- | ----- |
| TensoRF | 33.28 | 0.97 | 0.016 | 5.84 |
| Ours + TensoRF | 33.09 | 0.97 | 0.016 | 22.34 |
>It is mentioned, but I think the writing could make it a lot more clear that only the color samples are being reduced, as this significantly affects where the method would actually be expected to provide a speedup. As it is, this is quite easy to miss and could lead to misunderstanding if one does not read the method carefully.
Thank you for this constructive feedback. We'll modify the manuscript to make this point clearer.
>[1] Gupta, Kunal, et al. "MCNeRF: Monte Carlo rendering and denoising for real-time NeRFs." SIGGRAPH Asia 2023 Conference Papers. 2023.
>[2] Chen, Zhiqin, et al. "Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
>[3] Reiser, Christian, et al. "Merf: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes." ACM Transactions on Graphics (TOG) 42.4 (2023): 1-12.
>[4] Yariv, Lior, et al. "Bakedsdf: Meshing neural sdfs for real-time view synthesis." ACM SIGGRAPH 2023 Conference Proceedings. 2023.
---
Rebuttal Comment 1.1:
Title: We sincerely appreciate your time and valuable feedback
Comment: Dear Reviewer GERG,
We sincerely appreciate the time and effort you have dedicated to providing valuable feedback. As the discussion period concludes on Tuesday, August 13, please let us know if there are any remaining questions or if further clarifications are needed. We would be more than happy to provide any additional details.
Best Regards,
Authors | Rebuttal 1:
Rebuttal: # General Response
We’d like to thank all the reviewers for their valuable feedback, especially for acknowledging our contributions regarding the theoretical foundation and the effort we made to validate and show the plug-and-play attribute of GL-NeRF. Regarding common concerns among reviewers, we have added some experiments and presented the results to address these concerns.
## TensoRF Speedup Report
As mentioned in MCNeRF [1], NVIDIA GPUs have specialized components to accelerate neural network inference, so on such hardware, evaluating extra MLPs at more sample points may not incur as significant a cost as it does in WebGL. We therefore implemented a WebGL-based renderer and loaded TensoRF into it to test the speedup GL-NeRF gives us. Following MCNeRF, we trained a small TensoRF on the LEGO scene of the Blender dataset that fits in the WebGL-based renderer and report its performance here. By reducing the sampling points for the color MLP, our method achieves almost real-time performance in WebGL with quality similar to TensoRF, running on an AMD Ryzen 9 5900HS CPU.
| Method | PSNR | SSIM | LPIPS | FPS |
| -------------- | ----- | ---- | ----- | ----- |
| TensoRF | 33.28 | 0.97 | 0.016 | 5.84 |
| Ours + TensoRF | 33.09 | 0.97 | 0.016 | 22.34 |
## Application of GL-NeRF to Other Approaches
To demonstrate the plug-and-play attribute of GL-NeRF to other approaches, we combine GL-NeRF and Instant NGP[2] for evaluation. The results are shown in the table below. ***The reason we choose Instant NGP is that it is another representative of NeRF pipeline that relies on volume rendering and we want to highlight that GL-NeRF can be incorporated into any NeRF pipeline that relies on volume rendering, agnostic to the underlying representation.*** Since the result on WebGL using TensoRF proved that reducing the number of sample points can lead to significant speedup, here we simply give the MLP calls needed for reference.
| Method | PSNR | Avg. color MLPs calls |
| ------------------ | ----- | --------------------- |
| Instant NGP | 32.05 | 30.90 |
| Ours + Instant NGP | 30.35 | 4.00 |
>[1] Gupta, Kunal, et al. "MCNeRF: Monte Carlo rendering and denoising for real-time NeRFs." SIGGRAPH Asia 2023 Conference Papers. 2023.
>[2] Müller, Thomas, et al. "Instant neural graphics primitives with a multiresolution hash encoding." ACM transactions on graphics (TOG) 41.4 (2022): 1-15.
Pdf: /pdf/ffde002f638d6707ec76bb199db593ba55263a2d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation | Accept (poster) | Summary: The paper presents a novel approach to cross-domain few-shot semantic segmentation (CD-FSS) by introducing a lightweight frequency masker that aims to improve performance by filtering different frequency components for target domains. The authors propose an amplitude-phase-masker (APM) module and an adaptive channel phase attention (ACPA) module to reduce inter-channel correlations and enhance feature robustness against domain gaps.
Strengths: 1. The paper addresses a relevant and challenging problem in the field of few-shot semantic segmentation, particularly in cross-domain scenarios where performance typically suffers due to domain shifts.
2. The proposed lightweight frequency masker, including the APM and ACPA modules, introduces a novel perspective on feature disentanglement in the frequency domain, which is a promising direction for improving generalization across domains.
3. The authors provide a thorough interpretation of the phenomenon of frequency filtering and its impact on feature channel correlations, which is well-supported by mathematical derivations and empirical evidence.
4. The paper includes extensive experiments on four target datasets, demonstrating the effectiveness of the proposed method in reducing domain gaps and improving segmentation performance.
Weaknesses: 1. The novelty of the approach among existing work could be better established. It would be better to give a more detailed comparison with state-of-the-art methods that also attempt to address domain shifts in few-shot segmentation, especially those through the frequency operations.
2. The paper does not discuss the computational efficiency of the proposed method, which is an important consideration for practical applications. It would be beneficial to include details on the runtime and resource requirements of the approach.
3. Since overfitting is an important issue in the few-shot finetuning, I wonder how this method could benefit the model in reducing the overfitting.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Compare with other frequency-based methods**
We answered this question in the global response. We hope this resolves your concerns.
**2. The complexity analysis**
We present the results of the complexity analysis, showing that our APM and ACPA are extremely lightweight modules with minimal parameters and computational overhead. Our experiments were conducted on a single 4090 GPU. We also compared with a lightweight frequency-based method (DFF [1]), which further highlights our advantages in terms of computational overhead and parameters.
| | baseline (encoder + decoder) | APM-S | APM-M | ACPA | DFF [1] |
| :-------: | :--------------------------: | :---: | :---: | :---: | :-----: |
| Params(K) | 26174 (23600 + 2574) | 0.338 | 692 | 65.54 | 2100 |
| | baseline | ours (APM-S) | ours (APM-M) | DFF [1] |
| :-------: | :------: | :----------: | :----------: | :-----: |
| FLOPs (G) | 20.11 | 20.17 | 20.26 | 22.07 |
[1] Deep Frequency Filtering for Domain Generalization, CVPR2023
**3. Our method could benefit the model in reducing the overfitting**
We believe that overfitting in few-shot fine-tuning arises from two main causes: 1) the extremely limited samples (few-shot) prevent the model from fully learning each feature, making it prone to fitting extreme features (noise) and overly relying on feature correlations (e.g., if the training samples are red apples, the model binds red color to round shape and fails on a green apple at test time); 2) intra-class variations (such as viewing angle, transparency, and distance) hinder the model's ability to recognize the same features accurately.
Our APM addresses the first issue by reducing the correlation between features and eliminating channel bias (eliminating extreme features). ACPA tackles the second issue by leveraging the phase's invariant information to minimize intra-class variations. Consequently, our approach effectively mitigates overfitting in few-shot fine-tuning.
---
Rebuttal Comment 1.1:
Comment: concerns addressed
---
Reply to Comment 1.1.1:
Comment: Thanks for your response! If you have further questions, please feel free to tell us. We will continue to polish our work in the final version! | Summary: This paper discovers a phenomenon whereby simply filtering different frequency components for target domains leads to significant performance improvements. The paper then delves into this phenomenon for an interpretation and proposes an approach based on it, which achieves further performance improvements. The proposed method includes an amplitude-phase-masker (APM) module and an Adaptive Channel Phase Attention (ACPA) module, which are lightweight but effective, as validated by experiments.
Strengths: 1. The paper identifies an intriguing phenomenon where frequency filtering leads to performance gains in CD-FSS, which is a novel contribution to the field.
2. The proposed lightweight frequency masker introduces minimal additional parameters (0.01%) yet achieves significant performance improvements (over 10% on average), which is a strong practical contribution.
3. The paper includes extensive experiments on four target datasets, demonstrating the effectiveness of the proposed method.
Weaknesses: 1. Can this method be applied to other domains or tasks such as Cross domain few shot learning, domain generalization?
2. It would be helpful to include a sensitivity analysis on the choice of frequency components to filter, to understand the robustness of the method to different filtering strategies.
3. Some existing methods such as GFNet also applied filtering on the frequency domain, it would be better to compare with these methods, both in the related work and in the experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How this method can help other tasks such as cross domain few-shot learning, domain generalization, or few-shot object detection?
2. How is this work compared to other works applying frequency operation?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper addresses the CDFSS task from the aspect of frequency analysis, however, solely the frequecy analysis is not an novel aspect. This paper lacks discussion about the difference with previous works regarding the frequency analysis. But in all, I still recognize the novelty and contribution of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Our method can be applied to other tasks**
Our method can also be applied to cross-domain few-shot learning (CDFSL). Following BSCD-FSL[1] we implemented our method under this task setting (5-way 1-shot), and experimental results show that our method is effective in CDFSL as well.
| | CropDisease | EuroSAT | ISIC | ChestX | Ave. |
| --------------- | :---------: | :-----: | :---: | :----: | :---: |
| baseline [1] | 73.39 | 66.12 | 35.07 | 21.98 | 49.14 |
| baseline + ours | 82.01 | 68.95 | 38.86 | 24.07 | 53.47 |
[1] A Broader Study of Cross-Domain Few-Shot Learning
**2. Sensitivity analysis on the choice of frequency components to filter**
First, we visualized the average masker results for each domain to observe the filtered frequency components, as shown in the global rebuttal PDF. We found that the masker effectively adjusts to filter different frequency components according to different domains.
Then, we validated the robustness of APM by adding Gaussian noise during its adaptive process. Even with the added noise, APM could still dynamically adjust and filter out the frequency components detrimental to the current domain, demonstrating its robustness.
| | FSS | Deep | ISIC | Chest | Ave. |
| :---------: | :---: | :---: | :---: | :---: | :---: |
| baseline | 77.54 | 33.19 | 32.65 | 47.34 | 47.68 |
| APM | 79.29 | 40.86 | 41.71 | 78.25 | 60.03 |
| APM + noise | 79.03 | 40.06 | 40.82 | 77.92 | 59.46 |
**APM's initialization** We also explored different initialization strategies for APM. A value of 0 means no frequency components pass through, while a value of 1 means all frequency components pass through. "Rand" indicates random values uniformly distributed in [0,1], "gauss" indicates values drawn from a normal distribution, "clamp" indicates values clipped to [0,1], and "line" indicates values scaled linearly to [0,1]. The experimental results show that our APM is robust, quickly adjusting and adapting even with an initial value of all zeros. Our default initialization strategy is all ones, meaning all frequency components pass through initially, which also facilitates the dynamic adjustment and learning of APM.
| | FSS | Deep | ISIC | Chest | Ave. |
| :--------------: | :---: | :---: | :---: | :---: | :---: |
| baseline | 77.54 | 33.19 | 32.65 | 47.34 | 47.68 |
| **one (choose)** | 79.29 | 40.86 | 41.71 | 78.25 | 60.03 |
| zero | 76.7 | 35.32 | 40.63 | 76.09 | 57.19 |
| rand | 78.93 | 40.74 | 41.49 | 77.56 | 59.68 |
| gauss (clamp) | 78.26 | 39.43 | 41.38 | 76.89 | 58.99 |
| gauss (line) | 78.82 | 40.46 | 41.54 | 77.85 | 59.67 |
**3. Compare with other frequency-based methods**
We answered this question in the global response. We sincerely hope this could resolve your concerns.
Here, we provide a more detailed explanation of the differences between our work and GFNet (experimental results are in the global rebuttal table):
1) The motivation of GFNet is to use global frequency filters to replace self-attention or MLPs, reducing computational overhead while removing inductive biases and maintaining a large receptive field (which helps capture long-term dependencies). This is reasonable because a spatial location in the frequency domain represents global information. In contrast, our work is motivated by the observation that different frequency components play different roles in different domains; a frequency component beneficial in domain A might be harmful in domain B. Therefore, we designed an adaptive masker to dynamically filter different frequency components according to different domains. We also explored and validated the relationship between frequency and feature correlation.
2) GFNet's "filter" refers to a convolutional filter, which can be seen as a stack of multiple convolution operations (a multiplication operation in the frequency domain can be replaced by multiple convolution operations in the spatial domain), with values in the range (-∞, +∞). In contrast, our masker is used to filter frequency components, with values in the range [0,1].
---
Rebuttal Comment 1.1:
Title: The discussion phase ends soon, please consider participate ASAP
Comment: Dear Reviewer H2xr,
Please be reminded that the Author-Reviewer discussion phase will end very soon (in ONE day). Please take a look at the authors' rebuttal, see if they addressed your concerns. If you have any further questions/concerns, please post them ASAP, so that the authors may have time to respond to them!
Thanks,
AC | Summary: This paper presents a novel approach to cross-domain few-shot semantic segmentation (CD-FSS) by introducing a lightweight frequency masker. This masker aims to enhance the robustness of models against domain gaps by filtering different frequency components during the testing phase. The authors claim that their method significantly improves performance, sometimes by as much as 14%, without the need for extensive retraining or parameter tuning.
Strengths: This paper introduces a novel frequency masker that is lightweight and does not require training on the source domain.
The authors provide a clear explanation of the phenomenon where frequency filtering improves performance, supported by mathematical derivations and experiments.
This paper demonstrates significant performance improvements on multiple target datasets, which is a strong empirical contribution.
The proposed APM and ACPA modules are innovative and show promise in addressing the domain gap problem in few-shot segmentation.
Weaknesses: While the paper claims to reduce inter-channel correlation, why not directly constrain the model to reduce such correlation? How is this work compared with [27]?
The paper could benefit from a more thorough comparison with state-of-the-art methods, particularly those that also employ frequency domain techniques.
The authors might consider providing more details on the experimental setup, including data preprocessing and model training procedures, to ensure reproducibility.
Technical Quality: 4
Clarity: 4
Questions for Authors: Could other frequency-based works also achieve the correlation reduction?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: An important part of the analysis is the correlation reduction. However, the paper only includes a comparison with the reduction method by mutual information. Many other methods can also achieve this goal. In my opinion, the mathematical deduction only proves that the frequency operation is able to reduce the correlation, but does not show its advantages in such reduction. Therefore, I would like to see the author provide more comparison and analysis regarding this problem, such as directly comparing with [27].
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Compare with directly constraining the model**
We answered this question in the global response. We sincerely hope this could resolve your concerns.
**2. Comparing with [27]Channel Importance Matters in Few-Shot Image Classification**
We compared with [27] in the global response, and here we provide a more detailed explanation.
We implemented the two transformation methods from [27] under our task setting. The performance slightly declines on the FSS dataset, which is similar to the source domain. However, on the other three datasets that are more distant from the source domain, there is a slight performance improvement. Nonetheless, our method has an advantage in terms of performance, with improvements significantly surpassing those of [27].
We further elaborate on the differences between our work and [27]:
1)[27] found that different channels recognize different patterns, and the channel bias present between channels can affect the model's recognition ability. They improved performance by eliminating this channel bias through feature transformation functions. In contrast, our work posits that different frequency components play different roles in different domains. We dynamically filter out detrimental frequency components based on the domain, thereby reducing channel correlation and improving performance.
2)Compared to [27]'s spatial operations, our frequency operations have the advantage of better representing global information. A spatial position in the frequency domain represents information from the entire spatial domain, giving frequency domain operations a natural advantage in capturing long-term dependencies and maintaining a large receptive field. Additionally, compared to feature transformation and convolution operations in the spatial domain, frequency domain operations remove inductive biases.
**4. Compare with other frequency-based methods**
We answered this question in the global response. We hope this resolves your concerns.
**5. Could other frequency-based works also achieve the correlation reduction**
We tested the MI of the aforementioned frequency-based methods, and not all of them were able to achieve correlation reduction. They achieved correlation reduction in certain domains because they are used during training to enhance the model's generalization. However, due to the domain gap, feature extraction patterns that perform well on the source domain may not benefit the target domain. For example, DFF explores frequency components beneficial for generalization during training but its results show it filters out a lot of high frequencies and retains low frequencies. Therefore, its performance improvement might be due to filtering out noise in high frequencies. We visualized how our masker filters frequency components across different domains (the global rebuttal PDF displays these results). The frequency components to be filtered differ among target domains. Additionally, phase and amplitude need to be considered separately rather than being treated as a single entity. Hence, not all frequency-based methods can achieve correlation reduction.
| MI | FSS | Deep | ISIC | ChestX |
| -------- | :----: | :----: | :----: | :----: |
| baseline | 1.3736 | 1.3679 | 1.3789 | 1.3952 |
| DFF | 1.3742 | 1.3701 | 1.3702 | 1.3429 |
| GFN | 1.384 | 1.3682 | 1.3781 | 1.3605 |
| ARP-SP | 1.3705 | 1.3568 | 1.3713 | 1.3488 |
| DAC-SC | 1.3722 | 1.3557 | 1.3676 | 1.3526 |
| ours | 1.3501 | 1.2761 | 1.3139 | 1.2610 |
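To make the metric in the table above concrete, here is a hedged sketch of one common way to estimate mutual information between two feature channels via a 2-D histogram. The paper does not specify its exact estimator, so the function name, bin count, and toy data below are assumptions for illustration, not a reproduction of the reported numbers.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based MI estimate between two flattened channel activations.

    Note: estimator details (bins, no bias correction) are illustrative
    assumptions, not the paper's protocol.
    """
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of x
    py = pxy.sum(axis=0, keepdims=True)            # marginal of y
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.standard_normal(4096)   # stand-in for one channel's activations
b = rng.standard_normal(4096)   # an independent channel

# Identical channels share far more information than independent ones,
# so lower MI between distinct channels indicates reduced correlation.
assert mutual_information(a, a) > mutual_information(a, b)
```

Lower average pairwise MI across channels, as in the "ours" row of the table, is what the rebuttal means by correlation reduction.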
**6. More details on the experimental setup**
Here, we provide a more detailed explanation of our data processing method. We adopt the same setup and data processing as PATNet[22]. For FSS-1000, the official split for semantic segmentation is used in our experiment. We report the results on the official testing set, which contains 240 classes and 2,400 testing images. For Deepglobe, the images have a fixed resolution of 2448 × 2448 pixels. To increase the number of testing images and reduce their size, each image was cut into 6 pieces. This cutting has minimal effect on segmentation due to the irregular shapes of the categories. After filtering out single-class images and the 'unknown' class, we obtained 5,666 images, each with a resolution of 408 × 408 pixels, for reporting the results. For ISIC, the images have a spatial resolution around 1022 × 767. We down-size the images to 512 × 512 pixels. For Chest X-ray, due to the large size of the image, we down-size the images to 1024 × 1024 pixels.
**7. Why is frequency operation more advantageous in reducing correlation**
The aforementioned (global response, answer2) orthogonality constraints, whitening, MMC, and MI Loss (discussed in the main text) all use spatial operations to reduce correlation. Here, we elaborate on the advantages of frequency operations compared to spatial operations.
1) The frequency domain inherently offers finer granularity than the spatial domain, facilitating more precise feature disentanglement. When a spatial-domain channel (feature) is transformed into the frequency domain, each point in the frequency domain represents global information about the feature, refining the granularity from a single channel-level decision to h×w frequency-level ones.
2)The frequency domain inherently provides a more lightweight operation compared to the spatial domain. A simple multiplication in the frequency domain can be equivalent to multiple convolutions in the spatial domain. This makes modules operating in the frequency domain more lightweight, easier to adapt to different domains, and more advantageous when data is scarce.
3)The frequency domain inherently has a larger receptive field and better helps capture long-term dependencies, making it more effective for learning global information. This enables operations in the frequency domain to capture more independent channel patterns, leading to expanded activation regions and more generalized representations.
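The equivalence invoked in point 2 above is the convolution theorem: pointwise multiplication in the frequency domain equals circular convolution in the spatial domain. A minimal 1-D sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # stand-in for a 1-D feature
k = rng.standard_normal(8)   # stand-in for a filter

# Pointwise product in the frequency domain, back to the spatial domain.
freq_product = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

# The same circular convolution computed directly in the spatial domain:
# (x * k)[n] = sum_m x[m] k[(n - m) mod N]
circ_conv = np.array([np.sum(x * np.roll(k[::-1], i + 1)) for i in range(8)])

assert np.allclose(freq_product, circ_conv)
```

One multiplication per frequency bin thus stands in for a full spatial convolution, which is why a frequency-domain mask is so lightweight.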
---
Rebuttal Comment 1.1:
Title: The discussion phase ends soon, please consider participate ASAP
Comment: Dear Reviewer 79M1,
Please be reminded that the Author-Reviewer discussion phase will end very soon (in ONE day). Please take a look at the authors' rebuttal, see if they addressed your concerns. If you have any further questions/concerns, please post them ASAP, so that the authors may have time to respond to them!
Thanks,
AC | Summary: This paper makes several notable contributions to the field of cross-domain few-shot segmentation (CD-FSS). The authors discover that filtering different frequency components for target domains can lead to significant performance improvements, attributing this to reduced inter-channel correlation in feature maps, which enhances robustness against domain gaps and expands activated regions for segmentation. Building on this insight, they propose a lightweight frequency masker comprising an Amplitude-Phase-Masker (APM) module and an Adaptive Channel Phase Attention (ACPA) module. These components effectively reduce channel correlations and further enhance segmentation performance. The proposed method demonstrates significant advancements over current state-of-the-art CD-FSS approaches, highlighting its potential impact on the field.
Strengths: This paper presents several notable advantages in the field of cross-domain few-shot segmentation (CD-FSS). It identifies a significant performance improvement by filtering different frequency components for target domains, which reduces inter-channel correlation in feature maps and enhances robustness against domain gaps. The proposed lightweight frequency masker, consisting of the Amplitude-Phase-Masker (APM) and Adaptive Channel Phase Attention (ACPA) modules, effectively reduces channel correlations and improves segmentation performance with minimal additional parameters. The authors also provide relevant mathematical derivations to support their findings. The method demonstrates substantial improvements over state-of-the-art CD-FSS methods, making it a significant contribution to the field.
Weaknesses: - Performing frequency domain filtering on features is likely to result in some loss of information, potentially damaging the original structure of the features. Moreover, the mask weights required for different domains should vary. Are the authors training the APM on the source domain and then directly testing it on different target domains?
- The novelty of the method is limited. The idea proposed by the authors is very similar to [1] and seems to merely apply cross-domain techniques to cross-domain few-shot segmentation.
- The method proposed by the authors shows very limited improvement on some datasets and even performs worse than the existing state-of-the-art (SOTA) methods.
[1] Deep Frequency Filtering for Domain Generalization CVPR2023
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors claim that filtering certain frequency components can lead to significant performance improvements. How sensitive is this improvement to the specific frequency components chosen? Is there a systematic way to determine the optimal frequency components for a given domain?
2. The paper introduces the Amplitude-Phase Masker (APM) module. How does the initialization of the APM affect the final performance? Have the authors explored different initialization strategies?
3. The Adaptive Channel Phase Attention (ACPA) module uses phase information for attention weights. What is the rationale behind using only phase information rather than both phase and amplitude? How would the results change if amplitude information were incorporated?
4. The paper claims that the proposed method reduces inter-channel correlation in feature maps. How does this reduction in correlation compare to other feature decorrelation methods in the literature, such as those based on orthogonality constraints or whitening?
5. The authors use mutual information to measure inter-channel correlation. Are there other metrics that could provide additional insights into the nature of the feature disentanglement achieved by this method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. Computational overhead: The paper does not adequately address the computational cost of the proposed method. Frequency domain operations and the additional modules (APM and ACPA) likely introduce significant computational overhead, which should be quantified and compared to existing methods.
2. Ablation studies: The paper would benefit from more comprehensive ablation studies. For instance, the individual contributions of APM and ACPA are not clearly delineated, and the impact of different design choices within these modules is not thoroughly explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Filtering on features damages the original feature structure?**
Filtering certain frequency components does not damage the original feature structure; on the contrary, it is beneficial. Since not all frequencies are advantageous for the current domain, the mask is adjusted dynamically. When we invert the learned mask (1 − mask), performance decreases compared to the baseline, indicating that the frequencies we filter out are indeed detrimental components.
| | FSS | Deep | ISIC | Chest | Ave. |
| :-----------------: | :---: | :---: | :---: | :---: | :---: |
| baseline | 77.54 | 33.19 | 32.65 | 47.34 | 47.68 |
| APM (w/o ACPA) | 78.98 | 40.81 | 38.99 | 77.73 | 58.86 |
| Inv. APM (w/o ACPA) | 77.25 | 30.26 | 31.23 | 47.07 | 46.45 |
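The inverse-mask ablation above can be illustrated with a minimal frequency-masking sketch (a simplification: the actual APM mask is learned, and the function and variable names here are illustrative):

```python
import numpy as np

def apply_freq_mask(feat, mask, invert=False):
    """Mask frequency components of a (C, H, W) feature map.

    feat: real-valued feature map; mask: values in [0, 1] with the same
    spatial shape as the FFT output. invert=True applies (1 - mask),
    mirroring the 'Inv. APM' ablation in the table above.
    """
    spec = np.fft.fft2(feat, axes=(-2, -1))            # per-channel 2D FFT
    m = (1.0 - mask) if invert else mask
    filtered = spec * m                                # suppress masked frequencies
    return np.fft.ifft2(filtered, axes=(-2, -1)).real  # back to the spatial domain

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
mask = np.ones((8, 8))
mask[3:5, 3:5] = 0.0                                   # drop a band of frequencies
out = apply_freq_mask(feat, mask)
inv = apply_freq_mask(feat, mask, invert=True)
print(out.shape, inv.shape)                            # (4, 8, 8) (4, 8, 8)
```

With an all-ones mask the FFT round-trip recovers the input, so only the masked frequencies change the features.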
**2. Sensitivity analysis on the choice of frequency components to filter**
Since different domains require different weights, our method adapts directly to the target domain without the need for source-domain training. We visualized the average masker results for each domain to observe the filtered frequency components, as shown in the global response PDF. We found that the masker effectively adjusts to filter different frequency components according to different domains.
**3. Differences between our approach and Deep Frequency Filtering for Domain Generalization (DFF):**
We compare with DFF in global response answer 1. Here we provide a more detailed explanation.
1) Motivation: DFF aims to explore and retain frequency information beneficial for generalization during training, while filtering out frequencies that are not. However, we found that useful frequency information varies across different domains; frequencies beneficial to one domain may be harmful to others. Therefore, we focus on adaptively selecting beneficial information for different domains.
2) Amplitude and Phase: DFF does not distinguish between amplitude and phase, using attention mechanisms to filter out non-generalizable frequency components during training. However, amplitude and phase play different roles: amplitude contains domain-specific information, while phase contains domain-invariant information. Our APM independently adjusts amplitude and phase, filtering out detrimental frequency information separately. ACPA leverages the domain-invariant characteristic of phase to reduce intra-class variance between support and query.
3) Effectiveness: DFF performs well when input distributions differ but the label space remains the same. However, its effectiveness is limited in our task, where both input distributions and label spaces differ. We implemented DFF in our task, and our method demonstrated superior performance (see Q1 in global response).
**4. Performance could be further improved after segmentation refinement**
To highlight the effectiveness of our method, we did not employ techniques such as data augmentation or segmentation refinement in our main results. When we add the segmentation refinement of PANet [37], performance improves further. Even without any additional techniques, our method already surpasses the existing SOTA and shows significant improvement over the baseline.
| | FSS | | Deep | | ISIC | | Chest | | Avg | |
| :----------------: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Method | 1-shot | 5-shot | 1-shot | 5-shot | 1-shot | 5-shot | 1-shot | 5-shot | 1-shot | 5-shot |
| baseline | 77.54 | 80.21 | 33.19 | 36.46 | 32.65 | 35.09 | 47.34 | 48.63 | 47.68 | 50.10 |
| PATNet [22] (SOTA) | 78.59 | 81.23 | 37.89 | 42.97 | 41.16 | 53.58 | 66.61 | 70.20 | 56.06 | 61.99 |
| APM-M | 79.29 | 81.83 | 40.86 | 44.92 | 41.71 | 51.16 | 78.25 | 82.81 | 60.03 | 65.18 |
| APM-M refine | 80.02 | 82.35 | 41.23 | 45.57 | 42.56 | 53.69 | 78.76 | 83.22 | 60.64 | 66.21 |
**5. The different initialization strategies of APM**
We presented this experiment in our response to question 2 from reviewer H2xr; please refer to the APM initialization experiments there. We sincerely hope this resolves your concerns.
**6. Why does ACPA only use the phase information**
Previous interpretability studies have shown that phase is an invariant representation, while the amplitude varies between samples and contains specific information. To alleviate intra-class variations (such as viewing angles, transparency, and distances, which hinder the model's ability to recognize the same features accurately), we leverage the invariant nature of phase to align the feature spaces of support and query.
| | Ave. 1-shot | Ave. 5-shot |
| :-----------: | :---------: | :---------: |
| + amplitude | 58.32 | 63.25 |
| w/o amplitude | 60.03 | 65.18 |
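A toy sketch of the phase-only idea (the real ACPA module is learned end-to-end; this simplification just scores channels by support–query phase agreement, with illustrative names):

```python
import numpy as np

def phase_channel_attention(support, query):
    """Toy phase-based channel attention (a simplification of ACPA).

    For each channel, compare the FFT *phase* of support and query
    features; channels whose phase spectra agree (i.e. whose
    domain-invariant content matches) receive higher attention weights.
    """
    ps = np.angle(np.fft.fft2(support, axes=(-2, -1)))  # phase only; amplitude discarded
    pq = np.angle(np.fft.fft2(query, axes=(-2, -1)))
    # cosine of the phase difference, averaged per channel -> similarity in [-1, 1]
    sim = np.cos(ps - pq).mean(axis=(-2, -1))
    e = np.exp(sim - sim.max())
    return e / e.sum()                                  # softmax over channels

rng = np.random.default_rng(1)
s, q = rng.standard_normal((2, 6, 8, 8))                # 6-channel support/query maps
w = phase_channel_attention(s, q)
print(w.shape, round(float(w.sum()), 6))                # (6,) 1.0
```

Discarding `np.abs(...)` (the amplitude) before scoring is the point of the design: amplitude carries sample-specific style, so including it would reintroduce the intra-class variation the module tries to suppress.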
**7. Compare with other methods for reducing correlation/Compare with other frequency-based methods/More detailed about the individual contributions of APM and ACPA**
We answered these questions in the global response. We sincerely hope this could resolve your concerns.
**8. Other metrics validate that our method achieves feature disentanglement**
We normalize the feature map channels with L2 normalization and then compute the L1 norm to measure their sparsity. A smaller value indicates higher sparsity. After masking certain frequency components, the sparsity value decreases, indicating sparser features. Sparse features imply lower feature redundancy, which benefits feature disentanglement and thereby enhances the model’s generalization capability.
| sparsity | FSS | Deep | ISIC | ChestX |
| -------- | :---: | :---: | :---: | :----: |
| baseline | 31.85 | 32.41 | 32.4 | 31.86 |
| APM | 31.12 | 31.79 | 31.08 | 30.5 |
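The sparsity metric described above is directly computable; a minimal sketch (illustrative names, assuming a (C, H, W) feature map):

```python
import numpy as np

def channel_sparsity(feat, eps=1e-8):
    """Sparsity score as described above: L2-normalise each channel,
    then take its L1 norm. Lower values indicate sparser features."""
    c = feat.reshape(feat.shape[0], -1)                       # (C, H*W)
    c = c / (np.linalg.norm(c, axis=1, keepdims=True) + eps)  # per-channel L2 norm = 1
    return float(np.abs(c).sum(axis=1).mean())                # mean L1 norm over channels

rng = np.random.default_rng(2)
dense = rng.standard_normal((16, 8, 8))
sparse = dense.copy()
sparse[:, ::2, :] = 0.0                                       # zero half the activations
print(channel_sparsity(sparse) < channel_sparsity(dense))     # True
```

For a unit-L2 vector the L1 norm shrinks as mass concentrates in fewer entries, which is why a smaller value here means higher sparsity.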
**9. The complexity analysis**
We answered this question in our response to reviewer upoB's question 2. We sincerely hope this resolves your concerns.
---
Rebuttal Comment 1.1:
Title: Discussion phase ends soon, please consider participating ASAP
Comment: Dear Reviewer jSMF,
Please be reminded that the Author-Reviewer discussion phase will end very soon (in ONE day). Please take a look at the authors' rebuttal, see if they addressed your concerns. If you have any further questions/concerns, please post them ASAP, so that the authors may have time to respond to them!
Thanks,
AC | Rebuttal 1:
Rebuttal: **1. Compare with other frequency-based methods**
Here, we elaborate on the differences between our work and previous frequency-based methods.
DFF [1] explores and retains frequency information beneficial for generalization during training while filtering out frequencies that are not. GFNet [2] uses global frequency filters to replace self-attention or MLPs, reducing computational overhead while maintaining a large receptive field. ARP [3] proposes that a robust CNN should be resilient to amplitude variance and focus on the phase spectrum, thus introducing the Amplitude-Phase Recombination data augmentation method. DAC [4] proposes a novel normalization method, which eliminates only the style (amplitude) while preserving the content (phase) through spectral decomposition. Although all these methods enhance the model's generalization ability, they do not effectively bridge large domain gaps.
Our motivation stems from the observation that filtering certain frequency components can significantly improve performance, while different frequency components have varying effects on different domains due to domain gaps. We delved into this phenomenon and discovered that operations in the frequency domain can reduce the correlation between channels, achieving feature disentanglement. Therefore, our method does not require training on the source domain. Instead, it adaptively masks components that are detrimental to the current target domain (at the feature level). Additionally, we consider amplitude and phase independently rather than treating them as a whole, and we leverage the invariant characteristics of phase to design a channel attention module that addresses intra-class variations. Experimental results demonstrate that our method outperforms existing frequency-based methods in the CDFSS task.
| | FSS | Deep | ISIC | Chest | Ave. |
| ---------- | :-------: | :-------: | :-------: | :-------: | :-------: |
| baseline | 77.54 | 33.19 | 32.65 | 47.34 | 47.68 |
| DFF [1] | 78.18 | 32.16 | 35.71 | 60.29 | 51.59 |
| GFNet [2] | 76.86 | 32.23 | 33.95 | 53.12 | 49.04 |
| ARP-SP [3] | 78.83 | 35.06 | 35.61 | 59.83 | 52.33 |
| DAC-SC [4] | 78.27 | 35.98 | 36.02 | 57.66 | 51.98 |
| ours | **79.29** | **40.86** | **41.71** | **78.25** | **60.03** |
[1] Deep Frequency Filtering for Domain Generalization, CVPR2023
[2] Global Filter Networks for Image Classification, NeurIPS2021
[3] Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain, ICCV2021
[4] Decompose, Adjust, Compose: Effective Normalization by Playing with Frequency for Domain Generalization, CVPR2023
**2. Compare with other methods of reducing correlation**
In the main text, we compared our method with MI Loss. Here, we further provide comparisons with orthogonality constraints, whitening, and MMC [27].
For methods that directly constrain the model (orthogonality, whitening): the few-shot setting means a limited sample size, while existing models have a large number of parameters. Directly adjusting the model with constraints on such small datasets is not effective and, without careful tuning of hyperparameters, can even degrade performance. As seen, the performance of the orthogonality-constraint and whitening methods is not satisfactory.
For feature transformation/augmentation methods like MMC: the stability is not guaranteed because they use specific feature transformation functions. Due to the domain gap, a transformation method effective for one domain may not be effective for others. For example, MMC's performance on the FSS dataset not only failed to improve but declined. The MMC paper also mentioned that this method might experience performance degradation on certain datasets.
In contrast, our method has the advantages of being 1) lightweight (allowing for quick adaptation in a few-shot setting) and 2) stable and robust (with adaptive adjustments for different domains). These benefits are well reflected in the performance results.
| MIoU | FSS | Deep | ISIC | Chest | Avg |
| -------------------- | :---: | :---: | :---: | :---: | :---: |
| baseline | 77.54 | 33.19 | 32.65 | 47.34 | 47.68 |
| MMC (Simple) [27] | 77.48 | 34.70 | 34.32 | 48.74 | 48.81 |
| MMC (Oracle) [27] | 77.45 | 35.12 | 34.59 | 50.27 | 49.36 |
| baseline + orthogonality [1] | 78.13 | 34.61 | 34.05 | 50.58 | 49.34 |
| baseline + whitening | 77.92 | 33.22 | 32.98 | 50.89 | 48.75 |
| ours | **79.29** | **40.86** | **41.71** | **78.25** | **60.03** |
| MI | FSS | Deep | ISIC | Chest |
| -------------------- | :----: | :----: | :----: | :----: |
| baseline | 1.3736 | 1.3679 | 1.3789 | 1.3952 |
| MMC (Simple) [27] | 1.3742 | 1.3601 | 1.3782 | 1.3629 |
| MMC (Oracle) [27] | 1.3740 | 1.3582 | 1.3751 | 1.3605 |
| baseline + orth [1] | 1.3695 | 1.3611 | 1.3758 | 1.3590 |
| baseline + whitening | 1.3702 | 1.3668 | 1.3783 | 1.3577 |
| ours | **1.3501** | **1.2761** | **1.3139** | **1.2610** |
[1] Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?
[27] Channel Importance Matters in Few-Shot Image Classification
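For reference, the "baseline + orthogonality [1]" rows presumably use a soft orthogonality penalty; a common form of such a regulariser (an assumption about the exact variant used) is:

```python
import numpy as np

def soft_orthogonality_penalty(W):
    """Soft orthogonality regulariser, a common form of the constraint in [1]:
    penalise the deviation of the weight Gram matrix from identity,
    ||W^T W - I||_F^2, pushing the columns of W (filters) to decorrelate."""
    gram = W.T @ W
    return float(np.sum((gram - np.eye(W.shape[1])) ** 2))

# A matrix with orthonormal columns incurs (near-)zero penalty;
# correlated columns do not.
Q, _ = np.linalg.qr(np.random.default_rng(3).standard_normal((8, 4)))
print(round(soft_orthogonality_penalty(Q), 8))   # 0.0
```

Added as a loss term, this penalty must be weighted against the task loss, which is the hyperparameter-tuning burden mentioned above.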
**3. More details about the individual contributions of APM and ACPA**
APM filters out negative frequency components at the feature level within feature maps. It leads to a feature map that is more robust, generalizable, and provides broader and more accurate representations. Adaptive Channel Phase Attention (ACPA) can be seen as a process of feature selection. Building on the APM-optimized feature map, ACPA encourages the model to focus on more effective channels (features) while aligning the feature spaces of the support and query samples.
Pdf: /pdf/7f2ee561625df34267b8891ef0697f7e168167d4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Construction and Application of Materials Knowledge Graph in Multidisciplinary Materials Science via Large Language Model | Accept (poster) | Summary: This paper introduces the Materials Knowledge Graph, a pioneering graph database designed for materials science. It leverages advanced NLP methods and LLMs to extract and organize a vast amount of high-quality research into structured triples. It streamlines the discovery process by organizing information into nodes and edges, enhancing data integration and reducing the need for traditional experiments. The MKG also employs algorithms to predict material applications, offering a significant advancement in accelerating materials research.
Strengths: 1. This paper demonstrates a thorough and systematic methodology, including data preparation, model training, entity resolution, and graph construction, ensuring a robust and credible knowledge graph.
2. The MKG shows a cutting-edge approach to parsing and structuring vast amounts of scientific literature, offering a significant advancement in accelerating materials research.
3. The application of link prediction algorithms for predicting material applications is a robust method for identifying new potentials in the field.
4. Experiments show the effectiveness of the MKG and the employed models, enhancing the credibility and reliability of the results.
Weaknesses: 1. As the field of materials science evolves, maintaining the currency and accuracy of the MKG could become increasingly complex, requiring continuous updates and curation.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. How can your methods be adapted for other scientific fields? Are there specific modifications needed?
2. What strategies do you have for continuously updating and curating the MKG to ensure its relevance and accuracy?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have made a commendable effort in addressing the limitations of their work. They acknowledge the dependency on manual annotation for data preparation, which could limit scalability and timeliness.
However, the paper could benefit from a more detailed discussion on maintaining the currency and accuracy of the MKG as the field of materials science evolves. Continuous updates and curation will be crucial, and outlining specific strategies for this would strengthen the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the thorough review and valuable feedback on our manuscript. Your insights are highly appreciated and have been carefully considered to enhance our work. In this text, we will address each of your comments in detail, clarifying how we have addressed or plan to address the concerns raised.
**Question 1: How can your methods be adapted for other scientific fields? Are there specific modifications needed?**
Response: We appreciate this insightful question. Constructing a knowledge graph requires a well-defined ontology and a practical knowledge extraction method. To adapt our approach to other scientific fields, one primarily needs to design the ontology of the graph. This involves defining the types of nodes and the relations between them, as well as annotating a dataset. About 50 annotated examples based on our training template should suffice to establish a relatively accurate domain-specific knowledge graph.
In the entity resolution (ER) process, it is advisable to utilise an embedding model particularly suited to your domain. For materials science, where properties and applications are deeply influenced by complex material characteristics, additional enhancements should include integrating specialised embedding models tailored to capture these detailed interactions. These models enhance ER by embedding all entities within the materials science context, thus ensuring more accurate and contextually relevant data processing. Notably, while these enhancements are specialised for materials science, they build upon standard foundational models, employing pre-trained frameworks such as word2vec (mat2vec) and BERT (MatBERT). These models not only meet the intricate demands of materials science but also retain the flexibility to be applied broadly across diverse scientific domains. This ensures that our approach, while adaptable to the unique demands of materials science, remains universally applicable. In other words, when no suitable domain-specific models exist, the base models are also applicable.
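As an illustration of embedding-based ER (not the paper's exact pipeline; the entity names and stand-in 2-D vectors below are made up), surface forms can be merged greedily by cosine similarity:

```python
import numpy as np

def resolve_entities(names, embed, threshold=0.9):
    """Toy entity resolution: greedily merge surface forms whose embedding
    cosine similarity exceeds a threshold. `embed` maps a name to a vector
    (a stand-in for a mat2vec/MatBERT-style domain embedding)."""
    canon = []                      # list of (canonical_name, unit_vector)
    mapping = {}
    for name in names:
        v = embed(name)
        v = v / np.linalg.norm(v)
        for cname, cv in canon:
            if float(v @ cv) >= threshold:   # close enough -> same entity
                mapping[name] = cname
                break
        else:                                # no match -> new canonical entity
            canon.append((name, v))
            mapping[name] = name
    return mapping

# Stand-in embeddings: near-duplicate surface forms get near-identical vectors.
vecs = {"TiO2": np.array([1.0, 0.0]),
        "titanium dioxide": np.array([0.99, 0.05]),
        "graphene": np.array([0.0, 1.0])}
print(resolve_entities(vecs, lambda n: vecs[n]))
# {'TiO2': 'TiO2', 'titanium dioxide': 'TiO2', 'graphene': 'graphene'}
```

A domain-tuned embedding shifts which pairs clear the threshold, which is the point of swapping in mat2vec or MatBERT for materials text.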
**Question 2: What strategies do you have for continuously updating and curating the MKG to ensure its relevance and accuracy?**
Response: We thank the reviewer for their comment. We acknowledge the critical importance of keeping the MKG up-to-date with the latest advancements in the field. To achieve this, we have implemented several strategies for continuous updates. These include the automated monitoring of new publications using predefined keywords and topics, conducted periodically, along with semi-annual manual reviews by domain experts who validate and refine the graph's content. Additionally, we are harnessing advancements in Large Language Models, particularly the development of intelligent agents. These agents actively participate not only in adding new data but also in critically reviewing and correcting existing entries by identifying discrepancies between the updated graph and previous versions. This dual strategy of addition and correction plays a vital role in enhancing the accuracy and relevance of our knowledge graph.
Additionally, to further optimise the pipeline and reduce costs, we focus on technological innovations that streamline the process of KG construction. Our approach includes automatically constructing an expert dictionary for entity resolution and leveraging clustering algorithms to identify central nodes and pivotal connections within the graph. This method facilitates a more automated and efficient process. Automating these crucial steps significantly reduces reliance on manual curation while ensuring that our knowledge graph remains robust and comprehensive.
**Supplementary explanation:**
We greatly appreciate the reviewer's insights and comments. As mentioned by the reviewer, further enhancing the currency and accuracy of MKG is an essential goal for our future research. With sufficient currency in the knowledge extraction process, our next focus will be on improving entity resolution (ER) by replacing the machine learning model with Large Language Models (LLMs) to further improve the currency of the method. In addition, due to the existence of LLM, the amount of data required for the training set has been significantly reduced. We further use an active learning strategy to create a new training set through each round of knowledge extraction, further reducing labour costs. | Summary: This paper presents an innovative way on leveraging the power of Large Language Models for the construction of a Material Knowledge Graph (MKG) and link prediction. The method includes annotating few scientific articles (abstracts) related to material science which are used for training and finetuning LLMs. After that step and using additional articles and a finetuned LLM, triples are generated. Entity resolution is performed by using different Natural Language Processing (NLP) techniques such as ChemDataExtractor, mat2vec and an expert dictionary. Finally, the MKG is constructed and used for link prediction with the aid of network-based algorithms and graph embeddings. This approach is compared with another technique called MatKG2 and experiments were conducted for finding the LLM that provides the best results and for evaluating the link prediction of the MKG.
Strengths: • This is an innovative work for the fast and automatic construction of a Material Knowledge Graph with minimal annotation that could have a broad impact in the advancements of material science.
• It reuses effectively new technologies such as Large Language Models and other NLP techniques.
• It does not require too many annotated documents or other manual tasks.
Weaknesses: • More experiments would have better supported this work. The results of this method could have been compared with some baseline experiments of simply using LLMs for triple generation. Moreover, since this method is compared with MatKG2, it would have been useful to compare the precision, recall, and F1 scores of MatKG2 with those of MKG.
• While the work seems quite interesting, it is not well written, and several typos and syntactic errors have been found, which are detailed below. Proof-reading before submission would have been beneficial.
- Line 64: “ A user-friendly databases.. “ -> “User-friendly databases..”
- Line 80: Acronyms NER and RE are used but they are introduced later in the text (line 85)
- Line 89: “… through query the MKG.” -> “… through querying the MKG.”
- Line 93: “the elaborate workflow” -> “the elaborated workflow”
- Line 98: “NERRE” -> I believe this refers to NER and RE
- Figure 1 (b): ChemDataExactor -> ChemDataExtractor
- Figure 4: FMKG -> the material knowledge graph has been referred in the whole paper MKG. The acronym FMKG is introduced only in this figure.
- Line 233: “indicate”->”indicates", “achieve”->”achieves”, “contribute”->contributes”.
- Line 240: “ChemDataExactor” -> ”ChemDataExtractor”
- Line 252: “The result shows in.. “ -> “The result is shown in…”
• Figure 3 and the network-based link prediction are not well explained in the article.
• The code and Knowledge Graph are not available for further evaluation. The authors state that they will be made available only if the paper is accepted.
Technical Quality: 3
Clarity: 2
Questions for Authors: Limitations of this work are not detailed in the paper.
- What would you consider limitations of this work?
- How would you ensure that MKG will be up-to-date with the state of the art in material science?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: There are some concerns about the scalability and maintainability of MKG which are not addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the thorough review and valuable feedback on our manuscript. We will answer all your questions and concerns, clarifying how we have addressed the concerns raised.
**Question 1: What would you consider limitations of this work?**
Response: We would like to thank the reviewer for their comment. A limitation of this work is that some correct entities and relations are lost during the Entity Resolution (ER) task. As noted in the 'Appendix,' we prioritise accuracy in our knowledge graph, and thus we implement rigorous normalisation processes in ER. This approach may inadvertently lead to the wrongful exclusion of some correct entities. To mitigate this, we are planning a multi-tier knowledge graph approach, in which entity processing is staged across multiple levels. Initially, entities are processed with broader, less restrictive criteria. As they progress through the system, increasingly stringent criteria are applied at each level to refine and purify the entity data. Additionally, entities that are initially filtered out undergo a secondary review process using LLMs, which helps determine whether to reintegrate them or permanently exclude them. This staged, iterative process aims to balance accuracy with coverage, minimising wrongful exclusions while maintaining the integrity of our knowledge graph.
**Question 2: How would you ensure that MKG will be up-to-date with the state of the art in material science?**
Response: We would like to thank the reviewer for their comment. We agree that this is a crucial aspect of our future work. To ensure that the MKG remains current with the state of the art in materials science, we have implemented several strategies for continuous updates. These include automated monitoring of new publications using predefined keywords and topics, conducted periodically, and semi-annual manual reviews by domain experts to validate and refine the graph content. Additionally, we are incorporating advancements in LLMs, particularly the development of agents. These agents not only add new data but also critically review and correct existing information by identifying discrepancies between the updated graph and previous versions. This dual approach of addition and correction significantly enhances the accuracy and relevance of our knowledge graph.
**Weakness: More experiments would have supported better this work...**
Response: We tried adding several naive LLMs to the baseline experiments. However, compared with fine-tuned models, naive LLMs not only perform worse but also cannot stably perform NER and RE according to the expected format, which is why we excluded them. To further elaborate on the reasons, we cite a work that supports this viewpoint and have made modifications in the article: “We evaluated the performance of each LLM on each task…” → “We evaluated the performance of each LLM across various tasks. Xie et al. *(Tong Xie, Patterns)* have demonstrated by comparing GPT-3.5 with the fine-tuned LLaMA that the latter significantly outperforms the naive models. Even under more lenient manual evaluation conditions for GPT-3.5, a noticeable performance gap persists between it and the fine-tuned LLaMA. This evidence aligns with our findings.” Therefore, we exclude the naive LLMs from the evaluation baseline.
The purpose of comparing with MatKG2 is to highlight the traceability of knowledge in MKG, reflecting the optimisation of the process. Given that the relations in MatKG2 are predicted, we believe a quantitative comparison of the construction methods between MatKG2 and MKG would be unfair. Moreover, the data and code of MatKG2 are not available online. However, we fully acknowledge the reviewer's suggestion. Therefore, we have included the evaluation of MatBERT for knowledge extraction, which is the core of the MatKG construction, along with relevant discussion in the baseline. We think this additional experiment can provide a comparison not only between the MKG and the MatKG series but also between the LLM and non-LLM pipelines. The results of this expanded baseline comparison will be detailed in our next revision, and we have put the result in the global response for your reference.
*Xie, Tong, et al. "Creation of a structured solar cell material dataset and performance prediction using large language models." Patterns 5.5 (2024).*
**Weakness: Typo and code.**
Response: We thank the reviewer for pointing out the typo. We have carefully checked the paper again to ensure that all the typos and syntactic errors have been modified. All the code and data are open-source, and the link will be added once this work is accepted.
**Weakness: Extension for the Fig 3 and network-based link predication.**
Response: We would like to thank the reviewer's comment. We have replaced the original caption for Fig 3 with a new caption: "**Fig 3:** The process of network-based graph completion, illustrating how nodes are categorised into Materials, Properties, and Applications. The diagram on the left shows that both old and new Materials share similar Properties as well as similar Applications. As depicted in the centre diagram, this shared attribute implies a degree of similarity between the old and new Materials - similar characteristics and applications. Consequently, as shown in the right diagram, old Materials can potentially be utilised in new Applications."
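The caption's idea can be sketched as a shared-neighbour score on a toy graph (entity names are illustrative, not from the actual MKG):

```python
# Toy version of the network-based completion in Fig 3: score a candidate
# (material, application) edge by how much the new material's neighbourhood
# (properties + applications) overlaps with that of a material already
# linked to the target application.
graph = {
    "old_material": {"prop_A", "prop_B", "app_X", "app_Y"},
    "new_material": {"prop_A", "prop_B", "app_X"},
}

def shared_neighbour_score(m1, m2, graph):
    """Jaccard similarity of the two materials' neighbourhoods."""
    n1, n2 = graph[m1], graph[m2]
    return len(n1 & n2) / len(n1 | n2)

# new_material resembles old_material, so it may also suit old_material's
# remaining application.
score = shared_neighbour_score("old_material", "new_material", graph)
candidate = graph["old_material"] - graph["new_material"]
print(round(score, 3), candidate)   # 0.75 {'app_Y'}
```

The graph-embedding variant used alongside this replaces the set overlap with vector similarity, but the prediction logic (similar materials inherit each other's applications) is the same.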
**Limitation: There are some concerns about the scalability and maintainability of MKG which are not addressed in the paper.**
Response: Our current work aims to find an outstanding pipeline to construct the KG, so we have relatively simplified some content, such as focusing on the abstract of papers. In future work, the scalability and maintainability of the knowledge graph are important tasks, including but not limited to broadening the ontology, including quantitative data, integrating with existing materials science databases, and dynamically updating the knowledge graph.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and additional comparison results.
My concerns have been addressed and I am raising my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s constructive comments and welcome further input to enhance our paper’s quality in the time ahead. | Summary: The study presents an innovative pipeline for Knowledge Graph (KG) construction, specifically designed for efficient extraction of triples from unstructured scientific texts. The methodology enables fine-tuning of Large Language Models (LLMs) with limited annotated datasets, which is then utilized to extract structured information from extensive corpora of unstructured text. The authors have constructed a Material Knowledge Graph (MKG) that captures relationships between materials and their associated entities, such as properties and applications, derived from abstracts of 150,000 peer-reviewed papers.
Strengths: Originality: The paper introduces a novel pipeline for KG construction that departs from predictive modeling, enhancing the authenticity and traceability of extracted structured data.
Quality: The authors demonstrate the effectiveness and credibility of the MKG through ablation experiments and similarity analyses based on node similarity and graph embedding. The results indicate the substantial predictive capacity of the MKG, with 48.5% of 'material-application' predictions validated within nine years, which is impressive.
Clarity: The paper provides a clear and comprehensive explanation of the methodology, including the fine-tuning of LLMs, extraction of structured information, and construction of the MKG. Detailed results and analyses support the authors' claims.
Significance: The MKG has significant potential in extending the depth of structured information extraction, improving entity labeling precision, and adapting the pipeline to other scientific fields.
Weaknesses: Some figure captions, like "Fig 1(a)" and "Fig 3," are unclear. Captions would benefit from additional context.
Certain acronyms, such as "ER-NF" on line 239 and "NER” / “RE" on line 80, appear before being defined.
Typo on line 140: "task"s->task’s."
Technical Quality: 3
Clarity: 3
Questions for Authors: Does the inclusion of DOIs affect the resource consumption such as memory or performance in any noticeable way?
It is common practice to begin with abstracts. Do the authors intend to extend their work to encompass full texts in the future?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors did addressed the limitations, and mentioned that strict normalization and entity resolution process can loss some correct entities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the thorough review and valuable feedback on our manuscript. In this text, we will address each of your comments in detail, clarifying how we have addressed or plan to address the concerns raised.
**Question 1: Does the inclusion of DOIs affect the resource consumption such as memory or performance in any noticeable way?**
Response: We would like to thank the reviewer for their comment. The inclusion of DOI nodes indeed increases the total number of nodes within the knowledge graph, which in turn marginally elevates memory usage. However, the increase is manageable and does not stress modern hardware. We have performed stress tests to confirm that the additional memory requirements are well within the capabilities of modern hardware systems (in fact, the entire MKG is only 70 MB when converted into RDF format).
Regarding performance, the knowledge graph is a relationally structured database that enables selective interactions between nodes. For example, during operations such as shortest-path computation, nodes connected by the 'Sourcefrom' relation can be selectively excluded. This strategic exclusion eliminates any potential adverse impact on the graph's performance.
Additionally, we have other strategies to mitigate the influence of the increase in nodes, such as storing DOI nodes and other nodes separately. The subgraph of DOI is only queried when users specifically require information related to these DOIs, ensuring efficient data management and system performance.
**Question 2: It is common practice to begin with abstracts. Do the authors intend to extend their work to encompass full texts in the future?**
Response: We would like to thank the reviewer for their comment. We intend to extend our work to encompass full texts, which is an important part of our future work. In addition to broadening the scope of our analysis, we plan to refine our ontology design to include more comprehensive information. This will involve selectively extracting and standardising quantitative data from scientific articles. We also aim to implement a hierarchical approach to manage the complexity and enhance the efficiency of information extraction from full texts. For instance, we could initially categorise the text into sections such as 'Introduction,' 'Methods,' 'Results,' and 'Discussion.' This segmentation allows us to apply specific extraction techniques tailored to the information typically found in each section, streamlining the process and improving accuracy.
**Weakness:**
We thank the reviewer for highlighting the issues with our figure captions and typo errors. We have made the necessary corrections throughout the manuscript. For instance, the original caption for Fig 3 is replaced by: "**Figure 3:** The process of network-based graph completion, illustrating how nodes are categorised into Materials, Properties, and Applications. The diagram on the left shows that old and new Materials share similar Properties and similar Applications. As depicted in the centre diagram, this shared attribute implies a degree of similarity between the old and new Materials - similar characteristics and applications. Consequently, as shown in the right diagram, old Materials can potentially be utilised in new Applications."
---
Rebuttal Comment 1.1:
Comment: The explanations regarding the impact of DOIs on resource consumption and performance are reassuring. I appreciate the corrections you made to figure captions and typo errors. Overall, I am confident that your manuscript is technically solid and has high impact on the field, and I recommend its acceptance for publication.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer’s constructive comments and welcome further input to enhance our paper’s quality in the time ahead. | Summary: The paper on constructing and applying a materials knowledge graph (MKG) in multidisciplinary materials science via a large language model (LLM) is valuable and well-written. However, it primarily focuses on application rather than strong technical contributions, with issues in experimental design, lack of non-LLM baselines for comparison, and insufficient comparison with sophisticated knowledge graph completion methods. It may be a good dataset track paper but not the main track.
Strengths: - The studied problem is of great value in the real world
- The paper is well-written and easy to follow
- The produced MKG could be very helpful for the computational and experimental material science
Weaknesses: - W1: This paper leans towards engineering applications of LLMs rather than making strong technical contributions. The process consists of prompt engineering, basic model fine-tuning, and human-in-the-loop entity resolution. Moreover, this paper reads like an extended application of Darwin rather than an independent research work.
- W2: The experimental design is flawed. In particular, why is the normalization only applied to Darwin? Could other base models combined with normalization perform better? And does the normalization only work well for Darwin?
- W3: Non-LLM baselines for NER and RE should be included for comparison.
- W4: The modified Jaccard similarity method is claimed as a specified KG completion algorithm for material science. Therefore, the experiments should include comparisons with more sophisticated KG completion methods, instead of only comparing with TransE, which is outdated.
- W5: Minor issues include but are not limited to: Typos in Figure 4, what is FMKG?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to W2 and W5
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the thorough review and valuable feedback on our manuscript. Your insights are highly appreciated and have been carefully considered to enhance our work. We also want to emphasise some points in the paper to answer your question.
**W1 Response:**
We would like to thank the reviewer's comment. As reviewers xvtf and is27 have noted, this paper emphasises a novel process for constructing a materials science knowledge graph. The novelty of our work lies in integrating state-of-the-art LLM to perform NERRE simultaneously and for continuous and dynamic updating of the knowledge graph, allowing for real-time integration of new research findings directly into the MKG. The challenges include the potential for LLM-generated hallucinations and biases during the knowledge extraction process, which necessitates sophisticated normalisation and entity resolution processes to maintain the integrity and credibility of the knowledge graph. Additionally, it is crucial to emphasise that this is the first knowledge graph in the materials science domain that achieves multi-level connections for complex material science knowledge, representing a significant advancement over previous models that were limited to merely binary connections. For instance, instead of linking an alloy’s composition directly to its mechanical properties, MKG includes intermediate nodes such as processing techniques and microstructural features. This allows for a deeper exploration of the complex interdependencies that affect an alloy's properties, offering an unprecedented level of detail in modelling material interactions.
Solving STEM problems through LLMs may seem like an engineering flavour, but it is indeed a research direction encouraged by *NeurIPS*, such as the work of Hu et al. [1] and Lu et al. [2]. Compared with the biomedical field, the rarity of KG applications in material science is primarily due to the absence of effective knowledge graphs that can comprehensively capture the intricate data and complex relations inherent in these fields. Different from existing material science knowledge graph construction methods, such as the MatKG series, we utilise LLM for knowledge extraction, ensuring authenticity and traceability of the information. In other words, this work is not an extension of Darwin, although Darwin's performance is the best among several open-source models we compared.
[1] Hu, Xiuyuan, et al. "De novo drug design using reinforcement learning with multiple gpt agents." Advances in Neural Information Processing Systems 36 (2024).
[2] Lu, Pan, et al. "Learn to explain: Multimodal reasoning via thought chains for science question answering." Advances in Neural Information Processing Systems 35 (2022): 2507-2521.
**W2 Response:**
In this study, the application of LLMs is primarily focused on the knowledge extraction part, where we finetuned LLMs on a small set of annotated data to effectively perform entity and relation extraction. The subsequent normalisation process involves cleaning and standardising the NERRE results. Therefore, once we identified the LLM with the best performance in knowledge extraction (i.e., Darwin for this task), we concentrated on normalising its output. Theoretically, normalisation is applicable to any LLM that can successfully perform knowledge extraction. However, in our study, we opted for the best-performing model for in-depth analysis based on considerations of efficiency and clarity.
We have also revised the caption to make our ideas more straightforward and avoid misunderstandings: “*Table 1:* Comparative results of NER, RE and ER across different models using fine-tuned LLMs and non-LLMs. The Darwin model, which demonstrated the highest overall performance, was selected to showcase the effects of subsequent normalisation.”
**W3 Response:**
Initially, we focused on leveraging LLMs due to their advanced capabilities in handling the complex semantics of scientific texts, which is critical for both NER and RE. Besides, the low labour cost required for fine-tuning is also an indispensable factor in building an automated pipeline. In contrast, identifying relations with non-LLMs requires a large amount of accurately annotated data for training, which is inefficient for a dynamic knowledge graph that incorporates state-of-the-art domain knowledge. The absence of non-LLM models in our initial baseline was due to this strategic focus rather than an oversight. In response to the reviewer's insightful feedback, we have incorporated MatBERT and MatKG's construction methods. The results of this expanded baseline comparison are available in the global response for your reference and will be detailed in our next revision.
**W4 Response:**
We would like to clarify that the primary focus of our research is leveraging LLMs for KG construction, which addresses issues during the construction process rather than on graph completion itself. The purpose of employing graph completion techniques is to demonstrate the quality, credibility, and effectiveness of the KG we constructed. In other words, using both Jaccard and TransE was not intended to compare their effectiveness; instead, our goal was to explore different methods to enhance the credibility of these steps. Choosing these well-known methods ensures that the study remains approachable and comparable. We opted to modify the basic Jaccard algorithm due to the suboptimal performance of the standard similarity algorithm, as depicted in Fig 6. This modification reflects a common practice in materials science research, where researchers often select trending materials from fields aligned with their research areas as potential candidates for current applications. Our modifications, simulating this process, were minor yet crucial to better suit the needs of the MKG.
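For reference, the standard Jaccard neighbour-overlap score that this response says was modified could be sketched as follows; the node names and neighbour sets are invented for illustration and do not come from the MKG itself.

```python
def jaccard_similarity(neighbors_a, neighbors_b):
    """Standard Jaccard score between two nodes' neighbour sets, i.e. the
    baseline the rebuttal says was modified (illustrative sketch only)."""
    a, b = set(neighbors_a), set(neighbors_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical Property/Application neighbours of two Material nodes
old_material = {"high conductivity", "flexibility", "wearable sensor"}
new_material = {"high conductivity", "flexibility", "energy storage"}
print(jaccard_similarity(old_material, new_material))  # 2/4 = 0.5
```

A high overlap between an old and a new Material's Property/Application neighbours is what the graph-completion step in Figure 3 uses to suggest that the old Material may suit the new Application.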
**W5 Response:**
We apologise for this typo, and thank you for pointing it out. It has been revised to "MKG".
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, it solves some of my concerns.
Still,
1. It would be great if the authors could include non-LLM baselines, even on subsets of the data is valuable. It is hard to determine the effectiveness without comparisons to non-LLM methods.
2. It is important to show the proposed normalization can be (easily) applied to and can work well with other models except for Darwin because foundational models are rapidly getting better.
I acknowledge the potential benefits that this paper can bring to the community, while the above-mentioned drawbacks have to be improved.
Therefore I will keep my evaluation of the paper, according to its current state.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued engagement and constructive feedback. We appreciate your insights and have addressed your concerns as follows:
**1. Inclusion of Non-LLM Baselines:** As per your suggestion, we have already included results from MatBERT, which is a SOTA model for material science tasks, in our experiments for KG construction. Please find the results in the global author rebuttal.
**2. Normalization:** According to your suggestion, we have broadened our experimental scope to apply the normalization for LLaMA2. This additional experiment demonstrates the simplicity of applying normalization and its effectiveness on other models. If you deem it necessary, we will include the normalization results for each model in the final version of our manuscript and add relevant discussion. As we have mentioned, implementing this process is not challenging.
| Model | Task | Precision | Recall | F1 score |
|-------|------|-----------|--------|----------|
| | NER | 0.9331 | 0.9145 | 0.9237 |
| LLaMA2 (Normalization) | RE | 0.8517 | 0.8893 | 0.8701 |
| | ER | 0.9164 | 0.8902 | 0.9031 |
We hope these additions and clarifications address your concerns adequately. We believe that these improvements significantly enhance the contribution of our work to the community and appreciate your suggestions that led to these refinements. | Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation to all the reviewers for your valuable feedback on our work, and we have responded to all your questions (in the corresponding rebuttal sections). We also add some supplementary experimental results in the *pdf* file, mainly reproducing the KG construction method used by MatKG. Since this method is centered around MatBERT, we think this additional experiment can provide not only a comparison between the MKG and MatKG series but also a comparison between the LLM and non-LLM pipelines.
Pdf: /pdf/3412428e1f2a52ded7c95eae8fcbd7089aafff12.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Metric from Human: Zero-shot Monocular Metric Depth Estimation via Test-time Adaptation | Accept (poster) | Summary: This work proposes a test-time training technique to turn a monocular relative depth estimation model into a metric monocular one. The core insight of the work is to rely on the prior of a text-to-image model (Stable Diffusion v2) to generate humans in the scene for which we are interested in knowing metric depth. Given the prior of the model, the humans should be generated in a scale-aware manner and can therefore be used as hints to recover the metric scale of the scene. Using off-the-shelf methods (HMR-2) it is then possible to fit a SMPL model to the images of the humans and recover estimated metric depth for the humans. This information is finally used to train a simple linear layer that will transform a relative depth predicted by an independent model into a metric one for that specific scene. This whole process is repeated for each test image (and is therefore very slow) but does not require any metric training data and achieves decent results.
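The per-image alignment step summarized above, fitting a linear map from relative to metric depth using the recovered human depths, could be sketched as below. The closed-form least-squares fit, function names, and toy data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_scale_shift(rel_depth, metric_depth, mask):
    """Fit metric ~ a * rel + b over masked (e.g., human) pixels via
    ordinary least squares. Illustrative sketch, not the paper's code."""
    x = rel_depth[mask].ravel()
    y = metric_depth[mask].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(a), float(b)

# Toy example: "metric" depth is the relative depth scaled by 2, shifted by 0.5
rel = np.array([[0.1, 0.2], [0.3, 0.4]])
metric = 2.0 * rel + 0.5
mask = np.ones_like(rel, dtype=bool)
a, b = fit_scale_shift(rel, metric, mask)
print(round(a, 3), round(b, 3))  # 2.0 0.5
```

The recovered (a, b) can then be applied to the full relative depth map to obtain a metric prediction for that specific scene.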
Strengths: + Original idea. While it is quite well known in the literature that objects provide clues that metric monocular depth models use to recover the scale of a scene (and for this reason they can be easily fooled by optical illusions), I have not seen a prior proposal to use generative models to paint these objects (e.g., humans) when they do not occur naturally in the scene.
+ Clear motivation. I think Fig. 2 and Fig. 3 together with the introduction do a good job in explaining why the authors think that this problem is relevant and what are limitations of the existing solutions proposed in the literature.
Weaknesses: 1. Practicality. As reported in Fig. 5, the proposed pipeline requires up to 5 minutes per image; to ground the discussion, a forward pass of the biggest Depth Anything model takes tens of milliseconds. Besides the latency, this work relies on three different models (a relative depth model, an image inpainting model, and a human mesh prediction model) to achieve metric depth prediction on a single image. I am not convinced that these settings are realistic for any practical use case. I suggest the authors explore the same idea in an offline setting, i.e., as a way of generating pseudo metric ground truth for datasets that lack it.
2. Inherently limited by the support models. The proposal basically shifts the problem of recovering the scale of the scene from the depth estimation model to the image generation and human mesh estimation models. In particular, the inpainting model does the heavy lifting in this work, since it is tasked, given an arbitrary mask, with generating plausible humans. If either of the two models fails at its task, the pipeline has no way of recovering, as highlighted by the authors in the failure-case section of the work. If this intuition is correct, an approach that, like Marigold, starts from a pre-trained Stable Diffusion model and fine-tunes it for metric depth seems more promising than the proposed solution. This solution was not explored by the authors since they did not want to rely on any metric depth data.
3. Works only under certain assumptions. Besides the assumption of having good behaving support model discussed in weakness (2), the proposed solution also assumes that:
1. The heuristic that defines the area of the image to inpaint must often pick an area big enough for a human to be plausibly generated. This in turn completely excludes entire categories of scenes (e.g., close-ups or extremely wide views)
2. The depth estimation models are not affected by the artifacts introduced in the scene by the inpainting methods (e.g., Fig. 6 column 5 and 6 where the inpainted images do not make much sense from a semantic point of view)
Technical Quality: 3
Clarity: 3
Questions for Authors: a. What technique is it used for inpainting the image? (this should be specified in the paper)
b. Is Eq. 5 optimized only over the pixels with humans?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper discuss limitations of the current method
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort to review our paper. Below please find our specific answers to the questions.
1. **Practicality.**
We acknowledge that MfH is not currently efficient. The runtime shown in Figure 5 is based on a sequential generate-and-estimate process, where painted images are processed one after another, illustrating a linear correlation between computational cost and the number of painted images. In Table A, we provide a runtime breakdown with 32 images to paint when parallelized, revealing that the majority of the time is consumed by generative painting with diffusion models. Given the rapid advancements in diffusion sampling [R1, R2], we anticipate further improvements in MfH's inference speed in the near future.
| HMR | Generative Painting | MRDE | Optimization | Total |
| --- | --- | --- | --- | --- |
| 2.4s | 5.5s | 0.1s | 0.3s | 8.3s |
Table A. Runtime breakdown for an input image.
[R1] Consistency Models, ICML 2023
[R2] One-step Diffusion with Distribution Matching Distillation, CVPR 2024
2. **Offline settings.**
We appreciate your suggestion to explore offline settings, which is a direction we find promising. Both solutions have their advantages. The offline approach requires training with abundant in-the-wild images and pseudo ground truths. While being expensive to train and prone to scene dependency, it offers fast inference. In contrast, our MfH provides a more cost-effective solution that is training-free and directly benefits from advancements in support models.
3. **Inherently limited by the support models.**
We agree that the performance of MfH is related to the support models. However, our experimental results demonstrate that MfH, using current HMR and generative painting models, can predict metric depths satisfactorily. We further conduct ablation studies in Tables B and C to show the impact of different generative painting models and HMR models. They further indicate the potential for improved MMDE results with more advanced support models.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| SD v1.5 | 74.0 | 16.8 | 11.5 | 0.642 |
| SD-XL | 78.5 | 15.9 | 11.3 | 0.533 |
| SD v2 | 83.2 | 13.7 | 9.78 | 0.487 |
Table B. Ablation study for different generative painting models on NYUv2.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| HMAR [R3] | 82.0 | 14.2 | 9.83 | 0.489 |
| TokenHMR [R4] | 80.4 | 14.9 | 9.55 | 0.495 |
| HMR 2.0 [20] | 83.2 | 13.7 | 9.78 | 0.487 |
Table C. Ablation study for different HMR models on NYUv2.
[R3] Tracking People by Predicting 3D Appearance, Location & Pose, CVPR 2022
[R4] TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation, CVPR 2024
4. **Will failure cases lead to failure result?**
Our random generate-and-estimate process provides tolerance for failures in each component. Since the generative painting model can paint plausible humans in most cases, a few failures will not significantly impact the overall result. A sufficiently large number of painted images dilutes the influence of these failures, providing reasonable predictions. However, the idea of fine-tuning pre-trained Stable Diffusion for metric depth estimation is interesting, especially without metric depth annotations. We are excited to explore this dedicated problem in the future.
5. **Works only under certain assumptions.**
Thank you for pointing out the potential assumptions of our method. We address each concern below:
1. To assess whether MfH performs well with close-up or wide-view shots, we examine the MMDE results on the ETH3D dataset, which includes both indoor and outdoor scenes with various shot types. We annotate ETH3D images by differentiating their shot distances and plot the AbsRel comparisons in Figure R1 of the attached PDF. The results indicate no significant performance degradation when handling close-up or wide-view inputs, demonstrating that MfH can effectively handle general close-up and wide-view shots. A similar conclusion can be drawn from the in-the-wild qualitative results in Figure R2. However, as we acknowledged in Section 5, MfH may not perform well in extreme cases where humans cannot be present in the scene.
2. Since MfH only aligns the MRDE of unpainted areas ($\mathbf{D}^\text{rel}$ vs. $\{\hat{\mathbf{D}}^\text{rel}_n\}$), and MMDE of human areas ($\mathbf{D}^*_n$ vs. $\{\hat{\mathbf{D}}^\text{m}_n\}$), the potential non-human artifacts are not taken into account and will not affect the optimization process. Although artifacts can be semantically meaningless, we do not observe them significantly impacting the aligned depths of other contents, as evidenced by the point clouds in Figure 6, columns 5 and 6.
6. **Image inpainting technique.**
We adopt Stable Diffusion v2 for generative painting, which is based on Conditional Latent Diffusion [56]. Starting from a text-conditioned diffusion model checkpoint, the denoising UNet is finetuned with additional input channels for VAE-encoded masked image to result in a diffusion-based generative painting model. We will clarify this in our revised manuscript.
7. **Optimization range of Eq. 5.**
Your understanding is correct. Eq. 5 is optimized over the pixels of humans.
---
Rebuttal Comment 1.1:
Title: Checked
Comment: Thanks for providing a rebuttal to my criticisms.
I have carefully checked it and will discuss with the other reviewers whether to change my rating.
I tend to agree with Reviewer 4QJ3 that the method should be directly compared with metric depth estimation methods, which offer significant advantages over the proposal.
I think the intuition behind this paper is interesting and the proposed solution original, but ultimately there could be better ways of using these to obtain more practical models that can be executed efficiently (as pointed out in weakness 1).
Regarding answer 5.2, my point was more that inpainting results will change the appearance of the scene, which in turn will change the predicted monocular depth. While the MMDE loss is optimized only on the "human" area of the image, that area is in turn affected by the global appearance of the scene. Predictions are not completely independent per pixel. I can see how empirically this does not matter, but it is another possible source of error in the proposed pipeline.
---
Reply to Comment 1.1.1:
Title: Weakness 2: More practical models that can be executed efficiently.
Comment: We acknowledge that more efficient models could potentially be developed based on our approach. However, we would like to emphasize the following points:
1. Our primary contribution is identifying the issue of scene dependency and addressing it using scene-independent metric scale priors, which have shown promising results. While there may be more efficient or effective solutions, our goal with MfH is not to present a perfect solution but to highlight a potential direction for future research.
2. Test-time adaptation differs from the traditional training-inference paradigm, where the model remains unchanged during test time. While distilling knowledge from support models might enhance efficiency during test time, it could still retain scene dependency from the training phase. In contrast, MfH adapts the MMDE model based on each test sample during optimization, reducing scene dependency and allowing for a more tailored focus on each individual test case.
We appreciate your thoughtful comments, and will conduct more follow-up study based on the intuition of MfH, to make it more practical and useful in real-world applications.
---
Reply to Comment 1.1.2:
Title: Answer 5.2: Inpainting results will change the appearance of the scene.
Comment: Thank you for elaborating on the influence of inpainting on the final results. We agree that inpainting can modify some content of the original inputs, potentially introducing noise; as you mentioned, "predictions are not completely independent per pixel". Within the framework of MfH, since we have human masks, we can use them to crop out the painted human bodies and paste them into the original input image. This excludes the effect of semantically meaningless pixels.
---
Rebuttal 2:
Title: Additional Comparison with Recent Advanced MMDE Methods
Comment: Thank you for your valuable comments. We agree that including direct comparisons with recent advanced metric depth prediction methods would be beneficial. Since reviewer 4QJ3 emphasizes the importance of robustness on in-the-wild inputs, we also provide this comparison here as a reference.
For in-the-wild inputs, where ground truths are unavailable, we further conduct a user study. This study includes all images shown in Figure R2 of the rebuttal PDF with MMDE results from UniDepth-C, UniDepth-V, ZoeDepth-NK, ZeroDepth, Metric3D-v1, Metric3D-v2, DepthAnything-N, DepthAnything-K, and our proposed MfH. Participants are presented with input images and corresponding MMDE results from all methods, along with a color bar mapping depth values to colors. They are then asked to select the most reasonable MMDE result for each input sample.
To analyze the results, we take each input image as a separate sample, and add one count to the corresponding method if its MMDE result is selected as the most reasonable MMDE given the corresponding input image and the meter bar. We then calculate the selection rate for each method, representing the proportion of selected results for this method out of the total number of selections. So far, we have received 45 responses with the overall results in Table D. Further, we break down the results according to the maximum value of the meter bar as in Tables E-G.
These results indicate that our MfH method achieves the highest selection rate across all depth ranges, demonstrating its robustness. Metric3D-v2 also performs well, securing the second-highest selection rate. In contrast, other methods show variability in performance across different depth ranges. For example, DepthAnything-N has a high selection rate for short-range inputs but is not selected for inputs with larger maximum depths. This is probably due to its scene dependency: since it is trained on NYUv2, an indoor-scene dataset, its MMDE ability focuses more on short-range scenes. In our revised manuscript, we will include all MMDE results (also as qualitative comparisons), these quantitative results, and discussions. We will also keep updating the results as more responses are received.
We hope this user study, along with Tables 1-2 in the main paper, and Table R1 and Figure R2 in the rebuttal PDF, offers a more comprehensive comparison between our MfH and recent advanced metric depth prediction methods.
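The selection-rate computation described above could be sketched as follows; the votes below are invented for illustration, while the real rates in Tables D-G come from the 45 user-study responses.

```python
from collections import Counter

def selection_rates(votes):
    """Share of 'most reasonable MMDE' picks per method.
    Each vote is one (participant, image) selection (illustrative sketch)."""
    counts = Counter(votes)
    total = sum(counts.values())
    return {method: count / total for method, count in counts.items()}

# Invented votes from a few participants over a few images
votes = ["MfH", "Metric3D-v2", "MfH", "DepthAnything-K", "MfH"]
rates = selection_rates(votes)
print(rates["MfH"])  # 3/5 = 0.6
```

Each method's rate is simply its vote count divided by the total number of selections, so the rates across methods sum to one.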
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 12.6% | 6.3% | 3.6% | 18.2% | 6.1% | 5.4% | 0.8% | 4.3% | 42.6% |
Table D. Overall selection rate as the most reasonable MMDE result.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 4.0% | 13.8% | 0.0% | 18.2% | 5.3% | 12.0% | 1.8% | 3.6% | 41.3% |
Table E. Selection rate as the most reasonable MMDE result for short-range (10m-15m at max) inputs.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 17.8% | 3.2% | 1.9% | 16.2% | 6.7% | 2.5% | 0.3% | 5.4% | 46.0% |
Table F. Selection rate as the most reasonable MMDE result for medium-range (20m-40m at max) inputs.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 14.4% | 2.2% | 11.1% | 21.7% | 6.1% | 2.2% | 0.6% | 3.3% | 38.3% |
Table G. Selection rate as the most reasonable MMDE result for long-range (80m at max) inputs. | Summary: This paper introduces Metric-from-Human (MfH), a method to infer metric depths from images without needing metric depth annotations. Using humans as landmarks, MfH extracts scene-independent metric scale priors from generative painting models, overcoming the challenge of scene dependency in Monocular Metric Depth Estimation (MMDE). They propose a test-time adaptation framework that bridges Monocular Relative Depth Estimation (MRDE) to MMDE via a generate-and-estimate pipeline. Experiments show MfH's superior performance and generalization ability in zero-shot MMDE. The paper also addresses limitations, broader impacts, ethical considerations, and provides experimental settings and statistical significance of the results.
Strengths: The paper provides a thorough analysis of the advantages and disadvantages of recently researched MMDE and MRDE models, highlighting their differences. It introduces a novel method for obtaining metric-depth in a scalable manner.
The innovative use of generative painting and Human Mesh Recovery (HMR) techniques to leverage the strong prior of human figures is a significant advantage, which has potentials to reduce the reliance on expensive metric-depth annotations.
From an architectural perspective, the protocol for incorporating metrics into MRDE appears reasonable. Additionally, the paper reasonably discusses the limitations and broader impacts of the research.
Weaknesses: The experimental results presented seem quite poor. For example, [1] also achieves zero-shot performance on NYU or KITTI datasets, showing significantly better results than this paper. Of course, the proposed method focuses on converting MRDE to metric depth without using any metric annotations, which is a disadvantage. Therefore, I am curious whether the "Metric from Human" method would also benefit MMDE models like [1].
Similarly, the metric performance seems less than ideal, potentially due to the complexity of the scenes or inaccuracies in the human prior. Therefore, a detailed and fine-grained analysis is needed to determine the accuracy of the metric information provided by humans.
##### [1] UniDepth: Universal Monocular Metric Depth Estimation
Technical Quality: 3
Clarity: 2
Questions for Authors: Questions are included in the weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: As mentioned in the weaknesses, the overall metric performance is suboptimal. The method appears to be highly dependent on the performance of human recovery models and generative painting models. It is likely that in certain difficult scenes, the performance will not be maintained effectively.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort to review our paper. Below please find our specific answers to the questions.
1. **Experimental results.**
We acknowledge that currently our zero-shot MfH does not always outperform state-of-the-art many-shot methods. However, we would like to highlight our main contribution as pointing out the scene dependency problem of fully supervised many-shot MMDE and offering a potential solution. Our comparisons demonstrate MfH’s strong zero-shot MMDE performance across diverse scenarios, while fully supervised many-shot methods may degrade on unseen scenes. Additionally, we view MfH as a general framework for zero-shot MMDE using test-time adaptation, with performance that can be further enhanced with improved MRDE, HMR, and generative painting models.
2. **MfH can benefit MMDE models like UniDepth.**
We replace the current MRDE model with UniDepth in MfH and present the MMDE results in Tables A, B, and C below. The results show that MfH with UniDepth achieves better results on iBims-1 and ETH3D, but worse on DIODE (Indoor), than the original UniDepth. This confirms that the strong metric scale priors extracted by MfH can enhance MMDE models on unseen scenes. The degradation on DIODE (Indoor) occurs because it contains extreme close-up scenes, where painting humans can be difficult, as we acknowledge in Section 5. For such scenarios, we see potential in incorporating objects other than humans into the generate-and-estimate pipeline as metric landmarks.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| UniDepth-V | 79.8 | 18.1 | 10.4 | 0.760 |
| MfH (Ours) w/ Depth Anything | 42.2 | 34.5 | 13.2 | 1.363 |
| MfH (Ours) w/ UniDepth-V | 43.5 | 32.6 | 11.9 | 1.390 |
Table A. Performance comparisons of our MfH with different MRDE methods and UniDepth on DIODE (Indoor).
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| UniDepth-V | 23.4 | 35.7 | 6.87 | 1.063 |
| MfH (Ours) w/ Depth Anything | 67.7 | 23.3 | 9.73 | 0.738 |
| MfH (Ours) w/ UniDepth-V | 69.8 | 20.0 | 8.99 | 0.664 |
Table B. Performance comparisons of our MfH with different MRDE methods and UniDepth on iBims-1.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| UniDepth-V | 27.2 | 43.1 | 8.93 | 1.950 |
| MfH (Ours) w/ Depth Anything | 47.1 | 24.0 | 8.16 | 1.366 |
| MfH (Ours) w/ UniDepth-V | 51.2 | 25.5 | 9.17 | 1.489 |
Table C. Performance comparisons of our MfH with different MRDE methods and UniDepth on ETH3D.
3. **Detailed analysis of human scale priors with respect to scenes.**
To analyze the contribution of metric information from humans, we look into the MMDE results on ETH3D, which includes both indoor and outdoor scenes with diverse types of shots. Specifically, we annotate ETH3D images with two shot-related attributes and plot the AbsRel comparisons in Figure R1 of the attached PDF. They confirm that MfH can robustly recover metric depths, as it consistently achieves low errors across various types of shots. We also identify that the metric information from humans helps the most for level-angle inputs. This is likely because MRDE models tend to interpret similar semantics, such as different parts of a human body, as having similar depths. This interpretation aligns well with standing humans, which are typically generated in level-angle images. Moreover, we do not observe significant degradation when varying the shot distance. This indicates MfH can effectively handle general close-up and wide-view shots. We will include these analyses in our revised manuscript.
4. **Dependency on human recovery models and generative painting models.**
We acknowledge that MfH depends on HMR and generative painting models. To assess their impacts, we conduct ablation studies in Tables D and E. Table D demonstrates that MfH combined with stronger generative painting models yields better performance, likely due to their superior ability to generate realistic paintings. It is possible that if a generation model can better capture real-world 2D image distributions, it has a better sense of scale, serving as a more effective source of metric scale priors. Table E indicates that different HMR models do not significantly affect MfH’s performance, probably because current HMR models can robustly assist in extracting metric information for MMDE within our MfH framework. Overall, we anticipate better MMDE results with more advanced support models, which can be integrated in a plug-and-play manner.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| SD v1.5 | 74.0 | 16.8 | 11.5 | 0.642 |
| SD-XL | 78.5 | 15.9 | 11.3 | 0.533 |
| SD v2 | 83.2 | 13.7 | 9.78 | 0.487 |
Table D. Ablation study for different generative painting models on NYUv2.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| HMAR [R1] | 82.0 | 14.2 | 9.83 | 0.489 |
| TokenHMR [R2] | 80.4 | 14.9 | 9.55 | 0.495 |
| HMR 2.0 [20] | 83.2 | 13.7 | 9.78 | 0.487 |
Table E. Ablation study for different HMR models on NYUv2.
[R1] Tracking People by Predicting 3D Appearance, Location & Pose, CVPR 2022
[R2] TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation, CVPR 2024
---
Rebuttal Comment 1.1:
Comment: Most of my questions have been resolved, and I sincerely appreciate the thorough responses. However, I agree with the other reviewers (4QJ3 and xSFg) that including direct comparisons with recent advanced metric depth prediction methods would be beneficial. This approach could enrich your work and provide valuable insights for future research. Overall, considering the originality of the work, I will maintain my current score.
---
Rebuttal 2:
Title: Additional Comparison with Recent Advanced MMDE Methods
Comment: Thank you for your valuable comments and for acknowledging that most of your questions have been resolved. We agree that including direct comparisons with recent advanced metric depth prediction methods would be beneficial. Since reviewer 4QJ3 emphasizes the importance of robustness on in-the-wild inputs, we also provide this comparison here as a reference.
For in-the-wild inputs, where ground truths are unavailable, we further conduct a user study. This study includes all images shown in Figure R2 of the rebuttal PDF with MMDE results from UniDepth-C, UniDepth-V, ZoeDepth-NK, ZeroDepth, Metric3D-v1, Metric3D-v2, DepthAnything-N, DepthAnything-K, and our proposed MfH. Participants are presented with input images and corresponding MMDE results from all methods, along with a color bar mapping depth values to colors. They are then asked to select the most reasonable MMDE result for each input sample.
To analyze the results, we take each input image as a separate sample, and add one count to the corresponding method if its MMDE result is selected as the most reasonable MMDE given the corresponding input image and the meter bar. We then calculate the selection rate for each method, representing the proportion of selected results for this method out of the total number of selections. So far, we have received 45 responses with the overall results in Table F. Further, we break down the results according to the maximum value of the meter bar as in Tables G-I.
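The tallying procedure described above amounts to simple counting; a minimal sketch of it (with hypothetical method names and response format, not the study's actual implementation):

```python
from collections import Counter

def selection_rates(responses):
    """Each response is one participant's list of chosen method names,
    one choice per input image. Returns the fraction of all selections
    won by each method, i.e. the selection rate."""
    counts = Counter(choice for response in responses for choice in response)
    total = sum(counts.values())
    return {method: count / total for method, count in counts.items()}

# Toy example: three participants, two images each.
responses = [
    ["MfH", "Metric3D-v2"],
    ["MfH", "MfH"],
    ["DepthAnything-K", "MfH"],
]
rates = selection_rates(responses)  # rates["MfH"] == 4/6
```

The per-range breakdowns in Tables G-I would then follow from running the same tally on the subset of images sharing a given maximum meter-bar value.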
These results indicate that our MfH method achieves the highest selection rate across all depth ranges, demonstrating its robustness. Metric3D-v2 also performs well, securing the second-highest selection rate. In contrast, other methods show variability in performance across different depth ranges. For example, DepthAnything-N has a high selection rate for short-range inputs but is not selected for inputs with larger maximum depths. This is probably due to its scene dependency: since it is trained on NYUv2, an indoor scene dataset, its MMDE ability focuses more on short-range scenes. In our revised manuscript, we will include all MMDE results (also as qualitative comparisons), these quantitative results, and discussions. We will also keep updating the results as more responses are received.
We hope this user study, along with Tables 1-2 in the main paper, and Table R1 and Figure R2 in the rebuttal PDF, offers a more comprehensive comparison between our MfH and recent advanced metric depth prediction methods.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 12.6% | 6.3% | 3.6% | 18.2% | 6.1% | 5.4% | 0.8% | 4.3% | 42.6% |
Table F. Overall selection rate as the most reasonable MMDE result.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 4.0% | 13.8% | 0.0% | 18.2% | 5.3% | 12.0% | 1.8% | 3.6% | 41.3% |
Table G. Selection rate as the most reasonable MMDE result for short-range (10m-15m at max) inputs.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 17.8% | 3.2% | 1.9% | 16.2% | 6.7% | 2.5% | 0.3% | 5.4% | 46.0% |
Table H. Selection rate as the most reasonable MMDE result for medium-range (20m-40m at max) inputs.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 14.4% | 2.2% | 11.1% | 21.7% | 6.1% | 2.2% | 0.6% | 3.3% | 38.3% |
Table I. Selection rate as the most reasonable MMDE result for long-range (80m at max) inputs. | Summary: This paper enables monocular depth estimation to output metric-scale depth maps from only single images. To do this, the key idea of this paper is to leverage painting human 3D models into the input images in test-time adaptation, whose motivation is that human painter depicts subjects in consideration of scene configurations. With the painted human, the authors use it as landmarks to account for scenes in metric scale. After making an initial metric scale scene depths using the painted human, the proposed framework propagates the metric scale cues onto whole scenes using optimizations. The proposed method shows impressive performance on the zero/few-shot depth prediction over state-of-the-art methods.
Strengths: The greatest strength of this work is that it proposes a new paradigm for monocular depth estimation with metric scale. In real-world scenarios, a roughly estimated metric-scale depth is enough unless precise accuracy is required. Since this work does not target depth estimation with precise accuracy, there is no technical issue.
In addition, the idea of using both human painting and its observation is very interesting. In monocular depth estimation, finding a reasonable metric scale cue in a scene is important for obtaining metric scale, and not easy. This work successfully utilizes such a cue and shows robust performance over relevant works.
Lastly, this paper is well-written and easy for readers to understand. I hope the authors will release their source code and pre-trained weights for researchers in the same field.
Weaknesses: I did not find any weaknesses in this paper. Please check the Questions, as I have some requests for better technical descriptions.
However, the authors should fix the references. For example, do not cite a paper as an arXiv preprint, like [41], when it has been published in CVPR. Also, please check the details of reference entries like [60] (pages 0 to 0).
Technical Quality: 3
Clarity: 4
Questions for Authors: I have two requests as below:
1. Can you show me several results on images taken by the authors using DSLR and smartphone. To demonstrate the generality of this work, the results will be very helpful.
2. Following the first request, it would be interesting for the authors to run experiments on cases with radial distortion in the images. Nowadays, commercial cameras, especially smartphone cameras, are not based on pinhole camera models. That means the output from the Metric Head can be inaccurate depending on the camera used. With this in mind, the authors should discuss this practical issue based on real-world experiments.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: This paper has described its limitations well. While reading it, I thought of these two limitations myself and found that the authors had already considered them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort to review our paper. Below please find our specific answers to the questions.
1. **Results from DSLR and smartphone.**
We demonstrate qualitative results for DSLR and smartphone captured images in Figure R2 of the attached PDF. Depth predictions are truncated at different maximum values and displayed in color maps. These results show that our MfH can handle in-the-wild MMDE well, even for inputs with distortions, e.g., the first and last rows of the right column. Also, we observe fully supervised MMDE methods like UniDepth often provide bounded metric depths, inheriting from the limited range of sensors used in their training ground truths. In contrast, our MfH can provide more flexible results.
2. **Practical issue when the camera deviates from pinhole.**
We do observe distortions in some in-the-wild images, where using a linear metric head might not be ideal since some pixels might appear closer than the others. For general slight distortions, we expect the MRDE model, as one component of MfH, to handle these, as it leverages abundant training data which might contain distortions. To further compensate for stronger distortions, we can use a radius-related metric head to transform relative depths to metric depths, formally, the scale $s = s(r)$ and the translation $t = t(r)$. Then we believe our MfH can still work since our generate-and-estimate strategy with random painting allows local optimization of scales and translations.
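The radius-related metric head suggested above could, for instance, parameterize $s(r)$ and $t(r)$ as low-order polynomials in the normalized distance from the image center; a minimal sketch under that assumption (the rebuttal only proposes the general forms $s = s(r)$ and $t = t(r)$, so the polynomial choice and function names here are illustrative):

```python
import numpy as np

def radial_metric_head(relative_depth, s_coeffs, t_coeffs, center):
    """Map relative depth to metric depth with radius-dependent affine terms.
    s(r) and t(r) are polynomials in the normalized distance r from the image
    center, intended to compensate for radial distortion. With constant
    polynomials, this reduces to the plain linear metric head."""
    h, w = relative_depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - center[0], xs - center[1]) / np.hypot(h / 2, w / 2)
    s = np.polyval(s_coeffs, r)  # e.g. s_coeffs = [a1, a0] gives s(r) = a1*r + a0
    t = np.polyval(t_coeffs, r)
    return s * relative_depth + t

rel = np.ones((4, 4))
# Constant s(r) = 2 and t(r) = 0.5: equivalent to the plain affine head, 2*1 + 0.5.
metric = radial_metric_head(rel, s_coeffs=[0.0, 2.0], t_coeffs=[0.5], center=(1.5, 1.5))
```

The polynomial coefficients could be optimized with the same generate-and-estimate objective as the global scale and translation.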
3. **Reference errors.**
We appreciate your suggestion and have checked and fixed the reference errors accordingly in our revised manuscript.
4. **Source code and pre-trained weights.**
We appreciate your suggestion and will release our source code and pre-trained weights of support models upon acceptance.
---
Rebuttal 2:
Title: Checked
Comment: Thanks authors.
I have checked your rebuttal, and do not have questions anymore.
Best,
Reviewer bPK1.
---
Rebuttal Comment 2.1:
Comment: Thank you for your valuable comments. We will incorporate these extra results in our revised manuscript. | Summary: This paper targets zero-shot monocular metric depth estimation in the wild. The authors propose to use humans as landmarks to achieve metric scale, without any other information such as the focal length used in Metric3D and ZeroDepth. The key ideas are creative and well-motivated, addressing an important challenge in the field. While there are some limitations and areas for further exploration, the method shows clear improvements over existing approaches and opens up interesting directions for future work. However, it lacks more in-the-wild evaluation. From the comparisons, the metric accuracy is not convincing, so I cannot tell whether the model can recover metric scale in the wild.
Strengths: Novelty: The paper presents an innovative approach to zero-shot monocular metric depth estimation by leveraging generative painting models and human mesh recovery. This is a creative solution to the challenge of generalizing metric depth estimation to unseen scenes.
Problem Formulation: The authors clearly articulate the limitations of current MMDE approaches, particularly their scene dependency and data hunger. The motivation for their method is well-explained and supported by empirical evidence (Fig. 2 and 3).
Method: The proposed Metric from Human (MfH) framework is well-designed and clearly explained. The use of humans as metric landmarks and the generate-and-estimate pipeline are interesting ideas.
Potential Impact: If successful, this approach could significantly advance the field of monocular metric depth estimation, enabling better generalization to unseen scenes without requiring large amounts of metric-annotated training data.
Weaknesses: Lack of Detailed Results: The paper does not present any quantitative results or comparisons with existing methods in Table 1, such as Metric3D or UniDepth. This makes it difficult to assess the actual performance and advantages of the proposed approach.
Limited Discussion of Limitations: While the method is promising, there's little discussion of its potential limitations or failure cases. The method relies heavily on the performance of the generative painting and human mesh recovery models, which could introduce errors or biases. For instance, how does it perform when the generative painting model produces unrealistic or poorly scaled humans?
Computational Complexity: Given that it involves generative painting and human mesh recovery at test time, the method may be significantly slower than existing approaches. The computational cost and inference time of the test-time adaptation process are not addressed.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. You did not include comparisons with the state-of-the-art methods such as Metric3D and UniDepth in Table 1. These works include test results on the NYUv2 and KITTI datasets.
2. You should include comparative experiments with other generative painting and human mesh recovery methods because the method relies heavily on the performance of the generative painting and human mesh recovery models, which could introduce errors or biases.
3. Have you encountered cases where SD v2 and HMR2.0 failed to generate satisfactory human body and human mesh recovery? How did you handle situations where they couldn't produce good results?
4. Provide more details on the generative painting model and human mesh recovery model used and how it affects the results.
5. I noticed that in Figure 6, the human body in the Recovered Human Meshes and Painted Images point clouds do not overlap well. Please explain this phenomenon.
6. More discussion and analysis of failure cases of the approach.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper lacks enough discussion of limitations. More details are provided in Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort to review our paper. Below please find our specific answers to the questions.
1. **Comparisons with state-of-the-art methods.**
We will update Table 1 in our revised manuscript as Table R1 in the attached PDF. Our previous Table 1 aims to provide a fair comparison based on the availability of metric depth annotations (the number of shots). So we focus on zero/one/few-shot methods, excluding many-shot methods like Metric3D and UniDepth.
2. **Comparative experiments with different generative painting methods.**
In Table A, we ablate the effect of using different generative painting models in MfH. The results indicate that current generative painting models generally work well with MfH in MMDE. MfH combined with SD v2 produces the best outcomes, likely due to its superior ability to generate realistic paintings. It is possible that if a generation model can better capture the real-world 2D image distributions, it has a better sense of scale, serving as a more effective source of metric scale priors. Hence, we anticipate further performance gain of MfH with more advanced generative painting models.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| SD v1.5 | 74.0 | 16.8 | 11.5 | 0.642 |
| SD-XL | 78.5 | 15.9 | 11.3 | 0.533 |
| SD v2 | 83.2 | 13.7 | 9.78 | 0.487 |
Table A. Comparative experiments with different generative painting models on NYUv2.
3. **Comparative experiments with different HMR methods.**
In Table B, we examine the effect of different HMR models in MfH. We find that our approach is not sensitive to such changes. This suggests that humans can serve as relatively universal landmarks for deriving metric scales from images. Also, current HMR models can robustly help extract metric scales for MMDE with our MfH framework.
| Model | $\delta_1$ $\uparrow$ | AbsRel $\downarrow$ | SI$_{\log}$ $\downarrow$ | RMSE $\downarrow$ |
| --- | --- | --- | --- | --- |
| HMAR [R1] | 82.0 | 14.2 | 9.83 | 0.489 |
| TokenHMR [R2] | 80.4 | 14.9 | 9.55 | 0.495 |
| HMR 2.0 [20] | 83.2 | 13.7 | 9.78 | 0.487 |
Table B. Comparative experiments with different HMR models on NYUv2.
[R1] Tracking People by Predicting 3D Appearance, Location & Pose, CVPR 2022
[R2] TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation, CVPR 2024
4. **Discussion and analysis of failure cases.**
We show three typical failure cases in Figure 8 of our appendix in the main PDF. They include 1) the generative painting model producing non-human objects with human-like appearances, 2) the generative painting model incorrectly capturing the scene scale and producing out-of-proportion humans, and 3) the HMR model predicting meshes that penetrate each other. Since the generative painting model can paint plausible humans in most cases, a few failures will not significantly impact the overall result. Our random generate-and-estimate process with sufficient painted images makes MfH robust to outliers. Further, we speculate prompt engineering, as well as better sampling and filtering strategies in human painting, can improve the performance of MfH.
5. **Discussion of limitations.**
We discuss the limitations of MfH in Section 5, pointing out the two main assumptions MfH is based on:
1. We assume humans can plausibly exist in the scene, so that the generative painting model can paint humans onto the input image. While this holds for most usages of MMDE, it might not be ideal for some cases, e.g., close-up scenes. To this end, one future direction is incorporating objects other than humans into the generate-and-estimate pipeline as metric landmarks.
2. We assume the MRDE predictions align with true metric depths up to an affine transformation. Since the MRDE predictions can contain non-linear noise, a simple linear metric head as in MfH might not be optimal. Exploring alternative parameterizations of the metric head remains an open question.
6. **Computational complexity.**
We acknowledge that MfH is not currently efficient. The runtime shown in Figure 5 is based on a sequential generate-and-estimate process, where painted images are processed one after another. In Table C, we provide a runtime breakdown with 32 images to paint when parallelized, revealing that the majority of the time is consumed by generative painting with diffusion models. Given the rapid advancements in diffusion sampling [R3, R4], we anticipate further improvements in MfH’s inference speed in the near future.
| HMR | Generative Painting | MRDE | Optimization | Total |
| --- | --- | --- | --- | --- |
| 2.4s | 5.5s | 0.1s | 0.3s | 8.3s |
Table C. Runtime breakdown for an input image.
[R3] Consistency Models, ICML 2023
[R4] One-step Diffusion with Distribution Matching Distillation, CVPR 2024
7. **Human bodies not completely overlapping with human point clouds.**
Since we only optimize a single scale and translation to convert MRDE to MMDE, every point cloud is stretched with the same scale and translation. Hence, we do not expect all human bodies to overlap perfectly with their corresponding point clouds; instead, we seek a “mode” of the MRDE-to-MMDE transformation. This simple parameterization also provides regularization against outliers during the generate-and-estimate process.
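The single-scale, single-translation optimization described above can be sketched as a closed-form least-squares fit against the metric depths at painted-human pixels (a simplified illustration: function and variable names are assumed, and the actual pipeline optimizes over many painted images rather than one mask):

```python
import numpy as np

def fit_scale_translation(relative_depth, human_metric_depth, human_mask):
    """Fit one scale s and one translation t so that s*d_rel + t matches the
    metric depths recovered from painted humans at the masked pixels.
    Closed-form least squares; in practice, robust losses could down-weight
    outliers from failed paintings."""
    d = relative_depth[human_mask]
    z = human_metric_depth[human_mask]
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, z, rcond=None)
    return s, t

# Toy check: metric depth is exactly 2*relative + 1 at the landmark pixels.
rel = np.array([[0.1, 0.5], [0.9, 0.3]])
metric = 2.0 * rel + 1.0
mask = np.array([[True, True], [True, False]])
s, t = fit_scale_translation(rel, metric, mask)  # s ≈ 2.0, t ≈ 1.0
```

Applying the fitted `s` and `t` to the full relative depth map then yields a metric depth prediction for the whole scene, which is why individual human meshes need not overlap their point clouds exactly.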
8. **In-the-wild evaluation.**
We present qualitative results for DSLR and smartphone captured images in Figure R2 of the attached PDF. These results demonstrate our MfH can effectively handle in-the-wild inputs. Also, we observe fully supervised MMDE methods like UniDepth often provide bounded metric depths, inheriting from the limited range of sensors used in their training ground truths. In contrast, our MfH can provide more flexible results.
---
Rebuttal Comment 1.1:
Title: comments
Comment: Thanks for the authors' detailed reply. Most of my concerns have been solved.
I still have a suggestion. As this paper aims to recover metric depth, albeit with a human body prior, it should compare with all recent advanced metric depth prediction methods, including UniDepth, ZoeDepth, ZeroDepth, Metric3D, Metric3D-v2, and DepthAnything. Although these methods may use different priors, the problem is the same. Comprehensive comparisons on in-the-wild cases can provide insight for future researchers. As the training data varies, the quantitative comparison is not that important; robustness is the core problem.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and for acknowledging that most of your concerns have been addressed.
We agree that in-the-wild comparisons can offer significant insights for our readers. While the 1-page limit of the attached PDF (with no anonymous link allowed) only permitted us to present partial results in Figure R2, where we demonstrate the robustness of our approach, we will ensure to include more comprehensive in-the-wild comparisons with all recent advanced metric depth prediction methods as you mentioned in the revised manuscript.
---
Rebuttal 2:
Title: Additional Comparison with Recent Advanced MMDE Methods
Comment: For in-the-wild inputs, where ground truths are unavailable, we further conduct a user study. This study includes all images shown in Figure R2 of the rebuttal PDF with MMDE results from UniDepth-C, UniDepth-V, ZoeDepth-NK, ZeroDepth, Metric3D-v1, Metric3D-v2, DepthAnything-N, DepthAnything-K, and our proposed MfH. Participants are presented with input images and corresponding MMDE results from all methods, along with a color bar mapping depth values to colors. They are then asked to select the most reasonable MMDE result for each input sample.
To analyze the results, we take each input image as a separate sample, and add one count to the corresponding method if its MMDE result is selected as the most reasonable MMDE given the corresponding input image and the meter bar. We then calculate the selection rate for each method, representing the proportion of selected results for this method out of the total number of selections. So far, we have received 45 responses with the overall results in Table D. Further, we break down the results according to the maximum value of the meter bar as in Tables E-G.
These results indicate that our MfH method achieves the highest selection rate across all depth ranges, demonstrating its robustness. Metric3D-v2 also performs well, securing the second-highest selection rate. In contrast, other methods show variability in performance across different depth ranges. For example, DepthAnything-N has a high selection rate for short-range inputs but is not selected for inputs with larger maximum depths. This is probably due to its scene dependency: since it is trained on NYUv2, an indoor scene dataset, its MMDE ability focuses more on short-range scenes. In our revised manuscript, we will include all MMDE results (also as qualitative comparisons), these quantitative results, and discussions. We will also keep updating the results as more responses are received.
We hope this user study, along with Tables 1-2 in the main paper, and Table R1 and Figure R2 in the rebuttal PDF, offers a more comprehensive comparison between our MfH and recent advanced metric depth prediction methods. We sincerely hope this addresses your concerns regarding in-the-wild comparisons, and will appreciate it if you could kindly reconsider the rating. Thank you.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 12.6% | 6.3% | 3.6% | 18.2% | 6.1% | 5.4% | 0.8% | 4.3% | 42.6% |
Table D. Overall selection rate as the most reasonable MMDE result.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 4.0% | 13.8% | 0.0% | 18.2% | 5.3% | 12.0% | 1.8% | 3.6% | 41.3% |
Table E. Selection rate as the most reasonable MMDE result for short-range (10m-15m at max) inputs.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 17.8% | 3.2% | 1.9% | 16.2% | 6.7% | 2.5% | 0.3% | 5.4% | 46.0% |
Table F. Selection rate as the most reasonable MMDE result for medium-range (20m-40m at max) inputs.
| | DepthAnything-K | DepthAnything-N | Metric3D-v1 | Metric3D-v2 | UniDepth-C | UniDepth-V | ZeroDepth | ZoeDepth-NK | MfH (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Selection Rate | 14.4% | 2.2% | 11.1% | 21.7% | 6.1% | 2.2% | 0.6% | 3.3% | 38.3% |
Table G. Selection rate as the most reasonable MMDE result for long-range (80m at max) inputs. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their insightful feedback. They acknowledge the results as showing clear improvements (4QJ3), impressive (bPK1), superior (2dzs), and decent (xSFg). Reviewer 4QJ3 further finds our key idea creative and well-motivated, while reviewer bPK1 sees our method as establishing a new paradigm.
We include Table R1 and Figures R1 and R2 in the attached PDF, showing more complete comparisons with state-of-the-art models, performance comparisons with respect to different types of shots, and more in-the-wild qualitative results, respectively. Below we separately address the concerns raised in the reviews. We hope our responses clarify any confusion, and we are more than happy to provide further explanations if needed.
Pdf: /pdf/7d31be62495144abe56d1439a123399c0276908b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Knowledge Composition using Task Vectors with Learned Anisotropic Scaling | Accept (poster) | Summary: This paper presents a method called aTLAS, which leverages task vectors to enhance transfer learning in neural networks. Task vectors represent the difference in weights between a pre-trained model and its fine-tuned variant. aTLAS introduces anisotropic scaling to these task vectors by learning different coefficients for each parameter block, which allows for more effective composition of knowledge from different tasks. The method is tested in various scenarios, including task arithmetic, few-shot recognition, and test-time adaptation, demonstrating improvements in performance and efficiency.
Strengths: - An interesting way of using task vectors: the paper extends the concept of task vectors by introducing anisotropic scaling, which enhances the flexibility and effectiveness of knowledge transfer. This approach allows for fine-grained control over the composition of different task vectors, leading to better performance in multi-task settings.
- Evaluation and analysis: the method is evaluated across multiple tasks, including task arithmetic, few-shot learning, and test-time adaptation. The comprehensive set of experiments provides strong evidence for the effectiveness of the proposed approach. It provides insights on the behavior and importance of various parts of the neural network during task vector composition.
- Parameter efficiency: aTLAS is shown to be a parameter-efficient method for fine-tuning, which is particularly valuable in scenarios with limited data. The ability to achieve high performance with fewer learnable parameters is a significant advantage for practical applications.
Weaknesses: - **Only CLIP?** the method is primarily tested on the CLIP model and might not directly generalize to other architectures without significant modifications. Future work should explore the applicability of aTLAS across a broader range of model architectures.
- **Computational complexity (is this scalable)?** While the method is parameter-efficient, the computation of task vector compositions during training can still be resource-intensive, especially for large models. Strategies to optimize this process or reduce its computational footprint would be beneficial. The method's scalability with an increasing number of task vectors is not fully explored. While the paper shows that performance improves with more task vectors, it is unclear how this scales with very large sets of task vectors or in more complex multi-task environments. Also, learning different coefficients for each parameter block introduces additional complexity. The benefits of this complexity should be weighed against the potential for simpler approaches that might achieve similar results with less computational overhead.
- **How exactly should one select the task vectors?** The selection of task vectors for composition can significantly impact performance. The paper discusses various selection strategies but does not provide a definitive approach. Further research into more sophisticated selection mechanisms could enhance the method's robustness and effectiveness.
- **How to leverage this in real world?** The experiments are conducted in controlled settings with well-defined datasets. Evaluating the method's performance in more diverse and real-world scenarios would provide a better understanding of its practical applicability and limitations.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses section
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are encouraged that the reviewer finds our method interesting and the experiments comprehensive and insightful, and we are thankful for the feedback. Below, we address the questions and concerns.
**W1. Applicability across different model architectures**
For task arithmetic, we included results with ViT-{B/32, B/16, L/14} backbones (Table 1, 2) following previous practice (Ilharco et al., 2023 and Ortiz-Jimenez et al., 2023). For transfer learning applications including few-shot adaptation and test-time adaptation, we additionally showed results with ResNet-{50, 101} backbones besides ViTs (Table 13). Beyond CLIP models, Ilharco et al. (2023) showed that the properties of task vectors also apply to GPT-2 and T5 models. We believe task vector composition can also be applied to these models and plan to investigate in future work.
**W2. Scalability**
Research on larger pools of task vectors remains part of our future work. In such scenarios, LoRA task vectors emerge as the current best solution for mitigating memory usage. As shown in Fig. 6a and Table 14 in Appendix F, LoRA task vectors substantially reduce memory requirements while maintaining accuracy comparable to full task vectors, demonstrating their feasibility for large numbers of task vectors. This finding underscores the potential of LoRA task vectors as a memory-efficient solution for exploiting extensive collections of task vectors.
**W3. Definitive approach on task vector selection**
Thank you for pointing this out. We find in Fig. 5 that block-wise gradient-based selection is the best strategy, especially with a low budget on task vectors. Therefore, it is our recommended approach, and this has been explicitly stated at L213 in our revision.
**W4. Applying aTLAS to the real world**
We have demonstrated strong results of our method in five applications. In particular, few-shot adaptation, parameter-efficient fine-tuning, etc., have direct use cases in real-world scenarios. We also conducted experiments across 22 datasets that cover a wide range of domains, in order to test its general practicality. We believe the observations we made on these datasets can reasonably reflect the challenges in the real world.
References:
- Ilharco et al. Editing models with task arithmetic. ICLR'23
- Ortiz-Jimenez et al. Task arithmetic in the tangent space: Improved editing of pre-trained models. NeurIPS'23. | Summary: The paper introduces a method named aTLAS, which leverages task vectors and anisotropic scaling to enhance knowledge composition and transfer in pre-trained models. The authors investigate whether components of task vectors, particularly parameter blocks, exhibit similar characteristics and how these can be used to improve knowledge composition and transfer. The effectiveness of the proposed method is demonstrated in various tasks such as task arithmetic, few-shot recognition, and test-time adaptation.
Strengths: - The introduction of anisotropic scaling at the task vector level is novel and offers higher controllability in model behavior, particularly for task addition and negation.
- The method is thoroughly validated across multiple tasks and datasets, showing significant improvements in performance.
- aTLAS demonstrates strong parameter efficiency, making it suitable for scenarios with limited data.
- The method complements existing few-shot adaptation techniques, leading to additional improvements in performance when combined.
Weaknesses: - Knowledge composition and transfer are limited to the specific pre-trained model architecture, which may restrict its applicability across diverse model architectures.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the method handle potential conflicts when combining task vectors from very dissimilar domains? Will it negatively affect performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Knowledge composition and transfer are limited to the specific pre-trained model architecture, which may restrict its applicability across diverse model architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer recognizing the novelty, parameter efficiency and thorough experimentation in our work, and we are thankful for the feedback. We now address the questions and concerns as follows.
**W1. Knowledge composition and transfer are limited to specific pre-trained model architecture.**
While the experiments primarily use ViT-B/32 as the visual encoder, we showed that the proposed method works consistently across other architectures including ViT-B/16, ViT-L/14 (Table 1, 2), ResNet50 and ResNet101 (Table 13). Nevertheless, we acknowledge that aTLAS cannot combine task vectors obtained from different architectures, or transfer the knowledge in task vectors to a different architecture. This may require finding appropriate projections, and remains part of the future work.
**Q1. Task vectors from dissimilar domains.**
We studied the potential conflicts between task vectors using the disentanglement error (Ortiz-Jimenez et al., 2023), as shown in Figure 3 of the paper. Specifically, each row reflects the percentage of data in the corresponding dataset that has altered predictions after combining two task vectors. Therefore, a low disentanglement error indicates low interference between task vectors. More importantly, this interference also depends on the dataset: for two datasets A and B, combining their corresponding task vectors into one model may hurt the performance on one dataset more than the other.
Our method, particularly with standard task vectors (Figure 3c), generally decreases the disentanglement error across all pairs, which very effectively reduces the conflict between task vectors, whether they are obtained from similar or dissimilar domains.
References:
- Ortiz-Jimenez et al. Task arithmetic in the tangent space: Improved editing of pre-trained models. NeurIPS'23. | Summary: The paper enhances the performance of task arithmetic, a recent model editing technique based on weight interpolation, in vision-language models. Instead of the original task- and parameter-independent scaling coefficients of the task vectors, it proposes to learn anisotropic scaling coefficients from validation data, resulting in significant performance improvements, particularly in task addition. The method also proves effective in few-shot learning and test-time adaptation scenarios and as a parameter-efficient fine-tuning (PEFT) method in low-data regimes.
Strengths: - **Originality:** While some parts of the method are not entirely novel (see Weaknesses section), overall, aTLAS goes beyond previous work. The few-shot adaptation application and its relation to PEFT (e.g., using LoRAs as task vectors) are also original contributions to the field of model editing/merging.
- **Quality:** The work is generally of good quality.
- **Significance:** Editing foundation models is an emerging field with promising real-world impact. The results obtained with the proposed method are very good. Specifically, the authors merge the parameters of 8 ViT-L-14 CLIP models while retaining 97.07% of the performance of the single models (see Tab. 2).
Weaknesses: **Originality and references to previous work**
1. The idea of learning task-wise and layer-wise scaling coefficients for the task vectors is one of the main features of Adaptive Model Merging (*AdaMerging*, Yang et al., 2024). This work, which is not currently referenced, must be duly cited, along with a detailed discussion of the similarities and differences between aTLAS and AdaMerging.
2. Similarly, the idea of using test-time adaptation techniques such as entropy optimization was also present in Yang et al. (2024).
**Clarity and missing details**
3. The paper does not report the essential methodological details of aTLAS. Specifically, the loss, optimizer, learning rate, and hyperparameters for learning the scaling coefficients are missing.
4. The disentanglement error (line 143) must be defined and briefly explained. Moreover, as it seems to slightly differ from the original metric defined in Ortiz et al. (2024) – as likely it is computed only for the best set of scaling coefficients – this difference should also be mentioned.
5. In lines 153-155, some citations could be added, e.g., to the fact that the representation built by neural networks is often hierarchical and increases in complexity with the layer depth.
6. The experiments with ResNet backbones are never mentioned in the main text.
7. A comparison of the computational costs compared to standard task arithmetic is missing and should be provided.
Yang, E., Wang, Z., Shen, L., Liu, S., Guo, G., Wang, X. and Tao, D., 2024. AdaMerging: Adaptive model merging for multi-task learning. The Twelfth International Conference on Learning Representations.
Ortiz-Jimenez, G., Favero, A. and Frossard, P., 2024. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 36.
Technical Quality: 3
Clarity: 3
Questions for Authors: 8. **Multi-task learning vs. learning the coefficients.** Differently from the original task arithmetic technique, aTLAS involves learning the optimal merging coefficients for each task and block from data. How would the performance of fine-tuning the pre-trained model on the same number of data and for the same number of steps compare with your task addition results? What about the accuracy of the model on a control dataset that was not used during fine-tuning, e.g., ImageNet?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper adequately discusses its limitations. I do not foresee any potential negative societal impacts arising from this study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of the novelty, quality and impact of our work, and we are thankful for the feedback. In what follows, we address the questions and concerns.
**W1. Comparison to AdaMerging (Yang et al., 2024)**
We thank the reviewer for suggesting this comparison, and have added this in our revision (Section 3, Tables 2 and 3) with appropriate citations. In short, the idea of anisotropic scaling is a more general formulation, while layer-wise scaling in AdaMerging is a specific variant. Besides, AdaMerging is designed for model merging, similar to task addition, which is one of the five applications we investigated. Below, we detail the key differences between our work and AdaMerging.
- **Formulation**. We formulated anisotropic scaling as $\Lambda \mathbf{\tau}$, where $\mathbf{\tau}$ denotes a task vector and $\Lambda$ is a block-diagonal scaling matrix. When parameters in the same layer share one scaling coefficient, this formulation specializes to AdaMerging. However, for complex use cases such as parameter-efficient fine-tuning (Sec. 6.2 and Appendix F), we show that scaling different rows, columns or random partitions of weight matrices differently is necessary to increase the representation power. This variant can also be captured by our formulation. In addition, AdaMerging constrains the learned coefficients to be in $[0,1]$ while aTLAS does not. We observe that larger or negative coefficients can be beneficial for task addition in Fig. 10. **The formulation of anisotropic scaling therefore covers a wider range of use cases**.
- **Task vector variants**. We additionally studied combining linearized task vectors (Ortiz-Jimenez et al., 2023) in Section 4 and Appendix D.3, and using LoRAs as sparse task vectors to reduce memory consumption (Section 6.1).
- **Training objectives**. AdaMerging uses entropy as an unsupervised objective, while we experimented with contrastive, regularized entropy and pseudo-labelling objectives in Section 5.3. Note that our unsupervised pseudo-labeling algorithm UFM outperforms the entropy objective for test-time adaptation (Table 3).
- **Applications**. AdaMerging focuses on model merging, whereas our work investigates additional applications such as task negation (Sec. 4.1), few-shot recognition (Sec 5.1), test-time adaptation (Sec 5.3) and parameter-efficient fine-tuning (Sec 6.2), which, as noted by the reviewer, are original contributions to the field.
- **Novel insights for model merging**. As discussed in L141-146, anisotropic scaling reduces interference when merging task vectors (Fig. 3), a novel insight uncovered in our paper, highlighting its advantage over isotropic scaling and negating the need for linearized task vectors.
Last, we kindly note that AdaMerging's publication in ICLR'24 coincides with the submission period of NeurIPS'24. Nevertheless, we have added comparison and acknowledgements where appropriate throughout the paper and thank the reviewer for the suggestion.
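To make the anisotropic-scaling formulation above concrete, here is a minimal, illustrative sketch in plain Python. The function names and toy weights are ours, not the authors' implementation; real task vectors would be tensors per parameter block, and the per-block coefficients would be learned rather than hand-set:

```python
# A task vector is the per-block weight difference between a fine-tuned
# model and its pre-trained initialization. aTLAS-style anisotropic
# scaling assigns each (task, block) pair its own coefficient, whereas
# standard task arithmetic shares a single coefficient across all blocks.

def task_vector(pretrained, finetuned):
    """Per-block difference: tau[b] = theta_ft[b] - theta_0[b]."""
    return {b: [w1 - w0 for w0, w1 in zip(pretrained[b], finetuned[b])]
            for b in pretrained}

def apply_anisotropic(pretrained, task_vectors, coeffs):
    """theta = theta_0 + sum_t Lambda_t tau_t, one coefficient per (task, block)."""
    merged = {b: list(w) for b, w in pretrained.items()}
    for t, tau in enumerate(task_vectors):
        for b, delta in tau.items():
            lam = coeffs[t][b]  # learned per-block scaling coefficient
            merged[b] = [w + lam * d for w, d in zip(merged[b], delta)]
    return merged

# Toy model with two parameter blocks and a single task vector.
theta_0 = {"layer1": [0.0, 1.0], "head": [2.0]}
theta_ft = {"layer1": [1.0, 1.0], "head": [4.0]}
tau = task_vector(theta_0, theta_ft)
# tau == {"layer1": [1.0, 0.0], "head": [2.0]}
merged = apply_anisotropic(theta_0, [tau], [{"layer1": 0.5, "head": 1.0}])
# merged == {"layer1": [0.5, 1.0], "head": [4.0]}
```

Setting every block's coefficient to the same scalar recovers isotropic task arithmetic; layer-wise sharing recovers AdaMerging-style scaling.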
**W2. Entropy optimisation in AdaMerging**
We thank the reviewer for pointing this out and have added a discussion in L224 to acknowledge this. We highlight that UFM, the pseudo-labelling algorithm we designed, outperforms (regularized) entropy optimisation (SAR, Table 3). AdaMerging has also been included as a baseline in this table.
**W3. Technical details**
They can be found in Appendix A for task addition or negation; Appendix D.1 and D.2 for few-shot learning. In our revision, we have added hyperlinks at L160 for clarity.
**W4. Disentanglement Error**
The reviewer is correct that the disentanglement error is evaluated at the optimal coefficients. We have added detailed mathematical formulas in our revision for clarity, as follows
$\xi(\tau_1, \tau_2) = \textbf{E}_{\mathbf{x} \in \mathcal{D}_1} \big[ \delta \big( f(\mathbf{x}; \mathbf{\theta}_0 + \Lambda^\star_1 \mathbf{\tau}_1), f(\mathbf{x}; \mathbf{\theta}_0 + \Lambda^\star_1 \mathbf{\tau}_1 + \Lambda^\star_2 \mathbf{\tau}_2) \big) \big]$,
where $\Lambda^\star$ denotes the optimal coefficients and $\delta(x_1, x_2)$ is a distance function that returns 1 if $x_1 \neq x_2$ and 0 otherwise.
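As a rough illustration of how this metric could be computed, the following sketch uses a toy predictor and stand-in merged parameters; all names here are hypothetical, not the paper's code:

```python
# The disentanglement error is the fraction of task-1 examples whose
# prediction changes once the second scaled task vector is added on top
# of the first merged model, matching the delta indicator above.

def disentanglement_error(predict, theta_1, theta_12, data_1):
    changed = sum(1 for x in data_1
                  if predict(x, theta_1) != predict(x, theta_12))
    return changed / len(data_1)

# Toy stand-in classifier: sign of x * theta.
predict = lambda x, theta: 1 if x * theta > 0 else 0

# Flipping theta changes every prediction -> maximal interference.
err = disentanglement_error(predict, 1.0, -1.0, [1.0, -2.0, 3.0])
# err == 1.0
```

A low value of `err` would indicate that adding the second task vector leaves the first task's predictions largely untouched.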
**W5. Citations**
We have added references to Yosinski et al. (2014) and Kornblith et al. (2019) in our revision.
**W6. ResNet Experiments**
We have added a hyperlink to the results with ResNets (Table 13) in our revision.
**W7. Computational cost**
Standard task addition performs hyper-parameter search on one scaling coefficient ranging from 0 to 1, with an interval of 0.05, and therefore runs inference 21 times. Our method is trained for 10 epochs. Using an RTX 4090, our method takes 12min to train while the hyper-parameter search takes 20min.
**Q1a. Comparison against multi-task fine-tuning**
As shown in the figure in the attached PDF, our method consistently outperforms multi-task fine-tuning using different percentages of the validation data, since the training data is considered unavailable. However, with more data, fine-tuning is expected to achieve better performance.
**Q1b. Performance on a control dataset**
With our method, the merged model achieves 58.1% accuracy on ImageNet, which retains 91.6% of the zero-shot accuracy 63.4%. In comparison, the model from multi-task fine-tuning achieves 57.3% accuracy on ImageNet, which shows our method generalizes better.
References:
- Yang et al. AdaMerging: Adaptive model merging for multi-task learning. ICLR'24
- Ortiz-Jimenez et al. Task arithmetic in the tangent space: Improved editing of pre-trained models. NeurIPS'23.
- Yosinski et al. How transferable are features in deep neural networks? NeurIPS'14.
- Kornblith et al. Similarity of neural network representations revisited. ICML'19.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, which successfully addressed all of my concerns. I encourage you to include the additional discussions in the revised version of your submission. Overall, I believe this paper merits acceptance, and I have adjusted my initial score accordingly. Best wishes. | null | null | Rebuttal 1:
Rebuttal: We would like to thank each reviewer for dedicating their time to reviewing our paper. We are encouraged that the reviewers find our work novel/original (Reviewers [RiZ9](https://openreview.net/forum?id=G9OJUgKo4B&noteId=PnDeqZQ2DM), [GVMe](https://openreview.net/forum?id=G9OJUgKo4B&noteId=PnDeqZQ2DM)), our experiment results thorough (Reviewers [GVMe](https://openreview.net/forum?id=G9OJUgKo4B&noteId=PnDeqZQ2DM), [TrF6](https://openreview.net/forum?id=G9OJUgKo4B&noteId=UDzo9FVMVM)) and impactful ([RiZ9](https://openreview.net/forum?id=G9OJUgKo4B&noteId=PnDeqZQ2DM)). In particular, reviewer [RiZ9](https://openreview.net/forum?id=G9OJUgKo4B&noteId=PnDeqZQ2DM) appreciates our contribution in applications such as few-shot adaptation and parameter-efficient fine-tuning, which goes beyond previous work. Reviewers [GVMe](https://openreview.net/forum?id=G9OJUgKo4B&noteId=PnDeqZQ2DM) and [TrF6](https://openreview.net/forum?id=G9OJUgKo4B&noteId=UDzo9FVMVM) appreciate that our experiments provide insights on the behaviors of task vector compositions, such as our method being complementary to existing few-shot methods.
In response to the question from reviewer [RiZ9](https://openreview.net/forum?id=G9OJUgKo4B&noteId=PnDeqZQ2DM), we ran experiments to compare our method aTLAS against multi-task fine-tuning, and have included the new results in the attached PDF. Specifically, we show that our method outperforms multi-task fine-tuning significantly and consistently when using different percentages of data. The validation sets are used in this case following previous practice by Ilharco et al. (2023), because the training data for foundation models may be, and often is, unavailable.
We hope these results answer the question. We remain available throughout the discussion period in case of further questions and discussions. Thank you again for the feedback.
References:
- Ilharco et al. Editing models with task arithmetic. ICLR'23
Pdf: /pdf/6e95a6c1f06e12be9bf8e23b1e006b1ee07d0276.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Relational Concept Bottleneck Models | Accept (poster) | Summary: The authors propose Relational Concept Bottleneck Models, a family of relational deep learning methods that utilize concept bottleneck models to provide interpretable task predictions. R-CBMs are shown to predict well in various settings, matching the performance of black-box models.
Strengths: The paper studies an important problem
The paper is clearly written and experimental results are easy to follow
The results show strong and consistent improvements
Weaknesses: It might be nice to include a comparison to a simpler graph-based method that is not black box, e.g. [C&S](https://arxiv.org/abs/2010.13993) and a discussion of how the choice of relational task predictor and aggregation affect interpretability
Would be interesting to see how R-CBM compares with CBM on datasets that aren't specifically designed for relational prediction.
Technical Quality: 3
Clarity: 4
Questions for Authors: What is the explanation for why R-CBM-Linear does not effectively respond to interventions for RPS?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *R-CBM vs CBM comparison on non-relational datasets:*
**Vanilla CBMs are special cases of R-CBMs in non-relational domains (as described in L158-167)**, hence R-CBMs’ results on non-relational datasets (where predicates are unary) would be identical to propositional CBMs’ results (architecture and losses are both identical).
*Why does R-CBM-Linear not effectively respond to interventions for RPS?*
**The RPS task is a non-linear combination of concepts, so it can’t be solved with a linear model such as R-CBM-Linear.** When using non-linear task predictors (R-DCR or R-CBM-Deep), the task can be solved, and interventions become effective.
*Comparison with a simpler graph-based method that is not black box:*
**We thank the reviewer for the suggestion. We have added the comparison with the method Correct and Smooth (C&S) as suggested (cf. Table A of attached PDF).** Please, see also the common answer on baselines for more details. We added this baseline in Table 1 in the revised version of our paper.
---
Rebuttal Comment 1.1:
Title: Thanks for your comments
Comment: I thank the authors for their response, but maintain my rating of 7 | Summary: This paper proposes Relational Concept Bottleneck Models (R-CBMs), which merge CBMs and GNNs. More specifically, it encodes atoms into concepts like a CBM and then performs message passing like a GNN.
Strengths: 1. The idea of combining GNN and CBM is novel, and it enables the CBM to learn from relational data.
2. The results of the experiment are somewhat competitive.
3. The motivation for the experiments is articulated relatively clearly.
Weaknesses: 1. It seems like the CBM provides an initialization of the GNN model; therefore, it is not enough to compare with only the CBM model as a baseline, because essentially this is a GNN model. I think baselines of different GNNs + different initializations should be provided as comparison.
2. Concepts in CBM are clearly defined. Can the authors clearly define what the concepts are in each of their datasets? When the dataset becomes larger, how are the concepts defined?
3. I think some good properties that R-CBM learned, such as Examples 3.4 and 3.5, could be formalized better.
4. Table 4 lists some easy relations that R-CBM learned, while I think the power of R-CBM should lie in learning relations that people cannot easily identify. Could you show some other complex relations?
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness above
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Table 4 lists easy relations that R-CBM learned. Could you show complex relations?*
**Please notice that Table 4 shows rules learnt by R-DCR, other R-CBMs do not learn rules, but rather enable concept interventions (which is the main purpose of concept-based models).** R-DCR is the relational adaptation of Deep Concept Reasoner (DCR) \[1\], which was a method designed to provide **simple** **instance-based rules to aid interpretability** (cf. with section “Rule parsimony” in the original DCR paper \[1\]). In DCR, the complexity of the rules could be controlled with a hyperparameter $\\tau$. Modifying this hyperparameter leads to more complex rules, even though this is in direct opposition with the aim of interpretability. For instance, in RPS we obtained two different equivalent rules tuning this hyperparameter:
$\\tau=1$: wins(player1) $\\leftarrow$ $\\neg$rock(player1) $\\land$ paper(player1) $\\land$ $\\neg$scissors(player1) $\\land$ rock(player2) $\\land$ $\\neg$paper(player2) $\\land$ $\\neg$scissors(player2)
which is equivalent but longer than the following:
$\\tau=100$: wins(player1) $\\leftarrow$ paper(player1) $\\land$ rock(player2)
In KGs, by tuning this hyperparameter we also found more complex rules with redundant terms such as (notice that neighborOf(X,Y) and locatedIn(Y,W) provide further relevant evidence but are redundant):
locatedIn(X,Z) $\\leftarrow$ neighborOf(X,Y) $\\land$ locatedIn(Y,W) $\\land$ locatedIn(X,W) $\\land$ locatedIn(W,Z)
And we also found longer chains such as:
locatedIn(X,W) $\\leftarrow$ neighborOf(X,Y) $\\land$ neighborOf(Y,Z) $\\land$ neighborOf(Y,Z) $\\land$ locatedIn(Z,W)
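As a small sanity check of the claimed equivalence (our own illustration, not the authors' code), one can enumerate all one-hot concept assignments and confirm that the two RPS rules above always agree:

```python
from itertools import product

# The long tau=1 rule and the short tau=100 rule coincide whenever each
# player's concepts are one-hot, i.e. each player plays exactly one of
# rock/paper/scissors. We check all 9 such games exhaustively.

MOVES = ("rock", "paper", "scissors")

def long_rule(c):
    return (not c["rock1"] and c["paper1"] and not c["scissors1"]
            and c["rock2"] and not c["paper2"] and not c["scissors2"])

def short_rule(c):
    return c["paper1"] and c["rock2"]

def one_hot_assignments():
    for m1, m2 in product(MOVES, MOVES):
        c = {f"{m}1": m == m1 for m in MOVES}
        c.update({f"{m}2": m == m2 for m in MOVES})
        yield c

equivalent = all(long_rule(c) == short_rule(c) for c in one_hot_assignments())
# equivalent == True: the rules agree on all 9 one-hot games
```

Outside the one-hot regime (e.g. fuzzy concept activations) the two rules can diverge, which is why the longer rule is redundant only under mutually exclusive concepts.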
*It seems like the CBM provides an initialization of the GNN model & GNN baselines:*
* **R-CBMs are not providing the initialization of the GNN model. On the contrary, GNNs provide the initialization of the R-CBM bottleneck.** To clarify the relation between R-CBMs and GNNs, consider the following analogy with the non-relational setting. A CBM [12] can be seen as composed of three functions $g' \circ g'' \circ f$: the input encoder $g': x \rightarrow h$ (mapping raw features to embeddings), the concept predictor $g'': h \rightarrow c$ (mapping embeddings to concepts), and the task predictor $f: c \rightarrow y$ (mapping concepts to tasks). In the non-relational setting and in the image domain, ResNets are a common choice for $g'$ in the literature. In the relational setting, our input encoder $g'$ is composed of GNN layers. As a result, R-CBMs do not provide initialization for the GNN; rather, the GNN provides the input for the atom predictor of the R-CBM.
* **For a fair comparison, we compared R-CBMs with equivalent black-box baselines following the CBM literature.** For instance, in the non-relational setting, the original CBM paper ([12]) compared a ResNet (input encoder) + MLP (task predictor) black-box baseline with a CBM composed of a ResNet (input encoder) + linear layer (concept predictor) + MLP (task predictor). Following this, in our experiments we compared a GNN (encoder) + MLP (task predictor readout) black-box baseline with an R-CBM composed of a GNN (encoder, the same as the black box) + linear atom encoder (concept predictor) + MLP (task predictor readout).
* **We also provided additional black-box baselines in the KG experiments (Table 2 in the submitted paper), and we also compared with an additional graph-based method that is not black-box (cf. C&S in Table A of the attached pdf).**
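A minimal sketch of the three-function decomposition $g' \circ g'' \circ f$ described above, using illustrative placeholder functions rather than the paper's actual encoders:

```python
# Toy CBM pipeline: input encoder g' -> concept predictor g'' -> task
# predictor f. Exposing the concept layer c is what enables test-time
# concept interventions; all functions below are illustrative stand-ins.

def cbm_predict(x, encoder, concept_predictor, task_predictor, intervene=None):
    h = encoder(x)              # g': raw features -> embedding
    c = concept_predictor(h)    # g'': embedding -> concept activations
    if intervene:               # optional human correction of concepts
        c = {**c, **intervene}
    y = task_predictor(c)       # f: concepts -> task prediction
    return c, y

# Toy instantiation with two concepts.
encoder = lambda x: [2 * v for v in x]
concept_predictor = lambda h: {"big": h[0] > 1.0, "round": h[1] > 1.0}
task_predictor = lambda c: int(c["big"] and c["round"])

c, y = cbm_predict([1.0, 0.2], encoder, concept_predictor, task_predictor)
# c == {"big": True, "round": False}, y == 0
_, y_fixed = cbm_predict([1.0, 0.2], encoder, concept_predictor,
                         task_predictor, intervene={"round": True})
# y_fixed == 1 after intervening on the "round" concept
```

In the relational setting described in the rebuttal, `encoder` would be the GNN layers and `concept_predictor` the linear atom encoder, while the black-box baseline would skip the concept layer entirely.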
---
Rebuttal 2:
Comment: Please let us know if you have any further questions or things we could clarify further. If not, we would appreciate it if you could consider updating your review based on our replies. | Summary: The paper introduces Relational Concept Bottleneck Models (R-CBMs), which address the challenge of designing interpretable deep learning models that operate in relational domains. Existing Concept Bottleneck Models (CBMs) are interpretable but lack the capability to manage relational data, while Graph Neural Networks (GNNs) can handle relational data but are not as interpretable. R-CBMs integrate the strengths of both by allowing interpretability in a relational context. They achieve this by mapping input features to a set of human-understandable concepts, then using these concepts to make predictions. The paper evaluates R-CBMs across various experimental settings, demonstrating that they match or exceed the generalization performance of existing relational models, support quantified concept-based explanations, respond effectively to test-time interventions, and perform robustly in challenging scenarios like out-of-distribution testing and limited data availability
Strengths: 1. The integration of GNN with CBM is novel.
2. The generalization and efficiency experiment shows impressive result. This is important if we can extend CBM's interpretability to OOD problems.
3. The writing is excellent. Overall this is an enjoyable read. The authors clearly discuss the reasons of each component very clearly. With examples, they clearly articulate the aim of the paper.
4. The experiments were extensive, though real-world imaging datasets were not used.
Weaknesses: 1. Related work is insufficient and should have included interpretable methods addressing this issue. Here are papers the authors should include:
* [Posthoc based]
[1] Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat. Ghosh et al. ICML 2023
[2] POST-HOC CONCEPT BOTTLENECK MODELS. Yuksekgonul et al. ICLR 2023
* [CLIP based]
[1] Label-Free Concept Bottleneck Models. Oikarinen et al.
[2] Visual Classification via Description from Large Language Models. Menon et al. ICLR 2023
* [Relational CBM]
[1] Relational Concept Based Models. Barbiero et al. arXiv 2023.
[2] Interpretable Neural-Symbolic Concept Reasoning. Barbiero et al. ICML 2023.
2. Insufficient baselines. The authors should compare with at least one of the post-hoc baselines. Also, the original CBM is no longer SOTA, as there are multiple variants of CBM.
3. Do the authors assume access to concept annotations? This is expensive. Can it be alleviated by using LLMs to construct the concepts? There are existing works that use LLMs to deduce the concepts.
4. Currently the community is concerned about the incompleteness of concept annotations. By assuming concept annotations are available, the authors leave this question unanswered. How can this incompleteness be addressed?
5. CBMs can be used to find easy/hard samples, and with a relational model this could also be done. The authors could perform an experiment to find the easy/hard samples; see the Route, Interpret, Repeat paper.
6. How can the extracted concepts be evaluated quantitatively? The authors should have computed concept completeness scores (Yeh et al.) to do so.
7. How can this work be extended to imaging datasets, e.g. scene graph understanding?
Technical Quality: 3
Clarity: 4
Questions for Authors: See Weaknesses
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Concepts’ evaluations, e.g., concept completeness:*
**We report the completeness scores of each concept-based model wrt the relational baseline**, following Equation 1 in \[Yeh, et al.\]. The results are shown in Table D of the attached pdf (we added this result in Table 6, Section 5). An evaluation of concept efficiency was already in Table 5 in the submitted paper (showing the impact of reducing the number of concept/task supervisions during training).
*Can R-CBMs be used to find easy/hard samples?*
**Yes, R-CBMs can be used for this.** The methodology presented in the “Route, Interpret, Repeat” paper is not specific to a particular CBM, and it can be adapted to our methodology as well. In our framework, we consider as hard examples the ones whose prediction is highly uncertain when using a CBM with a propositional template (see CBM-Deep rows in Table C of the attached pdf). When using a relational template, instead, we verified that the prediction uncertainty significantly decreases. We show this in Figure A (attached pdf), where the prediction uncertainty decreases when transitioning from a propositional to a relational template. Table C (attached pdf) shows the concept/task activations for the hardest example to classify using the propositional template (high uncertainty) and the corresponding predictions when using the relational template (low uncertainty). We added this analysis to Appendix A.6 of the revised version of the paper.
*References for related works:*
**We thank the reviewer for the suggestions; we added the missing relevant references** (with the only exception of “Interpretable Neural-Symbolic Concept Reasoning” \[1\], which was already part of our evaluation). Regarding post-hoc and CLIP-based CBMs as related works, we are well aware of these papers, but in our opinion they are not closely related, as they focus on ways of obtaining concept labels when such concept annotations are not available. Their extension to the relational case is not trivial and requires significant further research, as discussed in the common answer to the reviewers.
*How to extend this work for imaging datasets like scene graph understanding?*
**The “Tower of Hanoi” dataset we used in our experiments already represents an example of a setting similar to scene graphs**, where disk images correspond to objects and their relations (i.e., top(u, v), larger(u, v)) correspond to concepts.
**References**
\[Yeh, et al.\] Yeh, Chih-Kuan, et al. "On completeness-aware concept-based explanations in deep neural networks." Advances in neural information processing systems 33 (2020): 20554-20565.
---
Rebuttal Comment 1.1:
Title: Post rebuttal comment
Comment: I would like to thank the reviewer for the rebuttal. My questions are mostly answered except the LLM and incompleteness.
Recent publications aim to solve this:
[1] A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis. Yang et al.
[2] CONCEPT BOTTLENECK MODELS WITHOUT PREDEFINED CONCEPTS. Schrodi et al.
[3] LLM-based Hierarchical Concept Decomposition for Interpretable Fine-Grained Image Classification. Qu et al.
I would like to remain with my score.
---
Reply to Comment 1.1.1:
Comment: **We provided the completeness scores of our method and CBM baselines in Table D** (see pdf attached to the global answer to all reviewers) and **explained the reason why concept annotations are often complete by construction in relational domains** (see common answer "Concepts’ definition/annotation for each dataset"). Please let us know if you have further questions regarding incompleteness that we have not addressed.
**Yes, LLMs might be used to construct concepts, but our paper focuses on an orthogonal research question, which is: "how can relational concepts be used in a CBM setting?"**. Moreover, it is still unclear how to perform effective concept interventions when LLMs are used to generate concepts. Indeed, even in recent papers (including the mentioned "Concept Bottleneck Models Without Predefined Concepts" [published on arXiv last month]), interventions are limited to intervening on the task predictor's weights and on a cherry-picked selection of samples. Further research might be required to understand whether concept bottlenecks and annotations constructed with LLMs provide the same guarantees in terms of interpretability and intervention effectiveness wrt other CBMs. We consider the integration of LLMs and relational CBMs a broad topic of interest for future work.
---
Rebuttal 2:
Title: Further comments [Reviewer]
Comment: By concept incompleteness, I did not mean the concept completeness score. The concept completeness score indicates how well your concepts can predict the downstream labels. Concept incompleteness means: what if your concept set is incomplete in the first place? Please look at the papers I referred to.
Regarding the concept completeness scores you provided in the pdf — are those numbers percentages? Usually, concept completeness scores are between 0 and 1. I see some numbers greater than 100 (e.g., 102.52). How is that possible? So I consider this part not rightly estimated.
For the LLM point, LLMs can be used to solve the question of incompleteness, and I believe at this point relational concepts can be obtained by using LLMs as well, so these are not orthogonal. On how to perform interventions with LLM concepts, there are papers like Language in a Bottle (LaBo) (CVPR 2023). However, I agree this can be explored in the future. But that makes the contribution borderline.
Thanks again.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer CYvd,
We are sorry for misinterpreting your comment regarding concept incompleteness.
**According to Definition 3.1 in Yeh, et al., the completeness score can be higher than 1 (100% in our table), whenever "the best accuracy by predicting the label just given the concept scores" (the CBM's accuracy) is higher than the accuracy of the black box prediction model.** Both the numerator and the denominator are normalized wrt the "accuracy of random prediction to equate the lower bound of completeness score to 0". However, this metric is not normalized to give an upper bound in 1 in its original formulation.
Regarding concept incompleteness in the relational domain, consider that in the relational setting the issue of concept incompleteness is less demanding than in the non-relational domain. Indeed, the set of concepts is often complete by construction as both concepts and tasks can relate to the same ground predicates (this is a property of relational datasets, as we described in the example in the common answer "Concepts’ definition/annotation for each dataset"). **One of the main results of our experiments shows that in the nine relational datasets we considered, this is actually the case: relational concept bottlenecks are complete, while non-relational bottlenecks are not.**
**Regarding LLMs, we only wanted to point out that this paper aims to fill a different research gap with respect to the problem of incomplete concept annotation and the integration of LLMs with CBMs: to the best of our knowledge, this is the first paper that shows how to construct CBMs in the relational domain**. Our contribution fills this gap notwithstanding whether the set of concepts and their annotations are given from supervisions, extracted from an unsupervised method [2], or provided by an LLM [1,3]. However, we acknowledge that studying concept incompleteness and integrating LLMs with CBMs in relational domains are interesting directions for future research.
Thanks again for your feedback and your comments; they definitely helped us improve our work. | Summary: This paper introduces Relational Concept Bottleneck Models (R-CBMs), a family of relational deep learning models that can provide some degree of interpretability and explainability; R-CBMs generalise both CBMs and GNNs.
According to the authors, R-CBMs 1) match the generalisation performance of existing black-box models, 2) support the generation of logic explanations, 3) respond to test-time concept and rule interventions, and 4) generalise to out-of-distribution samples and in limited training data regimes.
R-CBMs represent a relational graph as a set of atoms and dependencies among atoms, i.e., a directed hypergraph: each hyperedge defines a relational concept bottleneck from several ground atoms to a destination ground atom. Fig. 2 provides an example: the atom p4(b) can be predicted from the atoms [p3(b), p2(a, b), p1(b, a)], and these atoms all belong to the same hyperedge. What is the difference between the hyperedge notation and writing this as a Horn clause, e.g. p4(b) :- p3(b), p2(a, b), p1(b, a)?
R-CBMs are composed of a few components: 1) an atom encoder/predictor, 2) a message-passing component that updates the atom representations based on the dependency hypergraph, and 3) a relational task predictor. Having GNN-like message-passing components for updating the embeddings and the predictions of the atoms, in my opinion, may invalidate the interpretability claims of R-CBMs since GNNs are intrinsically black-box neural models.
One of my concerns with this work is that there are already models that can learn and leverage Horn rule-like structures for interpretable relational predictions, such as Neural Theorem Provers (e.g., https://arxiv.org/abs/1705.11040, https://arxiv.org/abs/2007.06477) -- what's the delta between R-CBMs and NTPs?
NTPs are really similar to the proposed approach -- for example, R-CBMs use "max" as the aggregation function to decide which hyper-edge to use when making a prediction, which is exactly the same strategy used by NTPs for deciding which "proof path" to use to prove whether a given atom is true or not.
Experiments -- probably the most used dataset for relational link prediction in Knowledge Graphs is FB15k-237; why did you use WN18RR instead? Are there scalability issues due to the higher number of atoms or relational predicates in FB15k-237? Also, the paper introducing WN18RR was not cited (as well as several papers introducing the baselines).
For WN18RR, the paper proposes an apparently wide set of baselines. However, extremely simple but effective baselines like ComplEx-N3 (https://github.com/facebookresearch/kbc/, https://arxiv.org/abs/1806.07297, ICML 2018) produce more accurate results than the proposed method (0.58 Hits@10 vs 0.56).
I really loved Tab. 4 with the examples of learned (symbolic) rules; however, learning symbolic rules is something that NTPs can also do -- can you please expand on the delta between R-CBMs and NTPs? Also, it's not completely clear to me how rules are learned -- is it because the encoder/predictor can be used to learn a dependency hypergraph that can be used to learn hyper-edges (the Horn clause-like structures)?
Update after reading the rebuttal: the authors addressed most of my concerns; I'm increasing my score!
Strengths: 1) Interesting work with some degrees of interpretability/explainability!
2) Results look robust for a non-factorisation-based method (I'm referring mainly to WN18RR since other link prediction datasets tend to be mostly solved/saturated)
3) Wide array of graph learning tasks
Weaknesses: 1) It is not clear if it's reinventing Neural Theorem Provers (e.g., https://arxiv.org/abs/1705.11040, https://arxiv.org/abs/2007.06477), which can also be used to learn FOL rules via back-prop
2) Why WN18RR instead of, e.g., FB15k-237? Missing very simple but effective link prediction baselines, e.g. https://arxiv.org/abs/1806.07297 (ICML'18)
3) Writing is a bit opaque -- why use "hyper-edges" to introduce Horn clauses?
4) Given all the GNN-like components inside R-CBMs, are they really interpretable?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Can you clarify how the learning of the symbolic rules happens? Is it by learning a link predictor in the atom encoder module that can be used to extract hyper-edges/Horn clauses?
2) Any answer addressing the "Weaknesses" would be really helpful
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1) Are there scalability limitations of the proposed model? E.g., would you be able to evaluate on FB15k-237?
2) What if you need more than one application of the rules? NTPs can handle that by applying them recursively
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Relations with Neural Theorem Provers:*
**More scalable variations of NTPs, such as CTP and Minerva, have been proved to be weaker baselines than other baselines (e.g., RNNLogic) that we considered in the experiments, e.g. the results in Table 3 from the RNNLogic paper \[24\]. This is why we decided to not include an older method like NTP, for which we would also miss a comparison on larger KGs, as NTP/CTP do not scale on them.** Please also note that our relational extension of CBMs is more general than NTP, or any other specific rule learner. In fact, CBMs do not necessarily learn rules (see the original paper \[12\]), and rule learning is not necessarily required for interpretability, which is usually assessed via concept interventions (Table 3), as discussed by the original CBM paper \[12\].
However, delving into more technical details, we claim that the R-CBM framework is more general and scalable than NTP for multiple reasons:
* R-CBMs are not limited to Horn clauses, for example R-DCR can learn rules with negated terms in the body.
* NTP has never been applied in classic CBMs setups where inputs are not symbolic but images.
* NTP has never been used to test interventions, which are an essential element for the interpretability of a CBM system.
* R-CBM templates can represent all the rules using a (subset of a) specified list of atoms in the body at the same time, and the embeddings will be used to determine which actual rule to instantiate in each given context. This is particularly explicit in R-DCR, where a template can form a FOL rule for a grounding and another rule for another grounding. On the other hand, NTP approach to rule learning is to enumerate all possible rules and let the learning decide which rules are useful. Please note that this approach is not scalable to larger KGs because of the combinatorial explosion of rules when there are many predicates in the dataset. Moreover, the rules in NTP are obtained after training by decoding the parameterized rules, **by searching for the closest representations of known predicates.** This is very different from R-DCR, where the rules use exactly the referred predicates and are transparently executed to get the final predictions in all the training phases (cf. with original DCR paper \[1\]).
We added a summary of the above discussion in the related works (Section 6) in the revised paper.
*Complex-N3 and FB15k-237:*
**To strengthen our results, we included ComplEx-N3 as a baseline and added a comparison on FB15k-237 as suggested (see Table B in the attached PDF)**. Please note that even if ComplEx-N3 provides competitive results, it is still a black box, and it does not directly support concept-based interventions (the main purpose of concept-based interpretability methods such as R-CBM). We added these results in Table 2 of the revised paper.
*Scalability:*
**Model inference scales as message passing (O(N\*C) where N is the graph size and C is the size of the largest clique)**. As any other method grounding on a relational domain (such as hyper-GNNs), the graph size N grows as the cartesian product of the domain in the template. As discussed in the limitations and in A.2.1 (please note that a deeper discussion is beyond the scope of this paper), there are effective heuristics to limit the size of the graph, while retaining the relevant information to solve a task, see for example \[24,33\].
*Can you clarify how the learning of the symbolic rules happens?*
**R-CBMs do not aim to learn rules, but rather to enable concept interventions (which is the main purpose of concept-based models) as discussed in the original CBM paper \[12\].** However, as R-CBMs generate symbolic concept layers, existing rule-based approaches can be applied on this layer to make interpretable predictions. In our paper, we used and compared with existing rule-based approaches including concept-based (DCR, \[1\]), NeSy-based (DeepStochLog), and KG-based (RLogic, RNNLogic, etc). Rule learning depends on the chosen method as described in the original papers respectively. For instance, DCR (and R-DCR, its relational adaptation) consists of (i) a neural module to learn the rule structure (by learning the relevance and polarity of each concept) and (ii) a symbolic module to execute the rule on the predicted concept truth values to produce the final prediction.
*What if you need more than one application of the rules?*
In R-CBMs the recursive application of the rules corresponds to repeating message-passing operations (Sec 3.2, L121).
*Why use "hyper-edges" to introduce Horn clauses?*
The hyperedge notation is more natural in the GNN community, where the current literature describes message passing operations on hypergraphs \[Feng et al.\].
*References for dataset/baselines:*
We have added to the revised paper the references when the dataset/baseline (such as WN18RR) is first mentioned in the paper.
**References:**
\[Feng et al.\] Feng, Yifan, et al. "Hypergraph neural networks." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019.
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: Thanks, I will increase my score! | Rebuttal 1:
Rebuttal: **Answer to all reviewers and ACs**
--------------------------------------
We first thank the reviewers for their thoughtful and insightful feedback. We think that by working on their comments, the quality of our manuscript has certainly improved, and we hope to have addressed all the raised concerns in this rebuttal. We reply to questions shared by two or more reviewers in this comment and reply to specific questions of single reviewers in the comments under their respective feedbacks.
Summary of Changes
---------------------
In the revised version of the paper, we have included both the results of some additional experiments we conducted during the rebuttal and a few lines to better clarify some insightful points raised by reviewers. **However, the core of our work’s contribution and evaluation remains unchanged**. In the following we summarise the list of changes we have made. **Throughout the whole rebuttal we use references with letters (A-D) for tables and figures in the additional pdf page in attachment, while references with numbering refers to our original paper.** The changes are the following:
1. We added a new experiment to compare with the Correct and Smooth (C&S) method (Table A, @XAga).
2. We added a new experiment on the dataset FB15k-237, and added Complex-N3 among the baselines for the link prediction task (Table B, @WYX8).
3. We added an experiment to show how our framework can be also used to identify easy/hard samples in a dataset (Table C, @CYvd).
4. We included the completeness score of the concepts used by R-CBM (Table D, @raJm,@CYvd).
5. We included a discussion on NTP (and its extensions) in the related work section (@WYX8).
6. We added an example in Section 4 to clarify how concepts and tasks are defined in all datasets (@raJm, @CYvd).
**\# Answer to common questions**
---------------------------------
*@XAga @CYvd @WYX8–Baselines:*
**Our work already includes SotA baselines** including SotA concept-based models (Deep Concept Reasoner \[1\], which is a further evolution of Concept Embedding Models \[5\], which is a SotA CBM architecture), SotA relational reasoners (DeepStochLog using optimal, ground-truth rules), and SotA KGEs (e.g., RNNLogic and the newly added ComplEx-N3).
* Regarding relational post-hoc CBMs, they would be interesting to consider, but as far as we know, there are no papers showing how to extend post-hoc CBMs to the relational case (to the best of our knowledge, we are the first authors to make this extension even for standard CBMs). In this setting, post-hoc CBMs would require relational concept discovery. However, CAV (the method used in post-hoc CBMs to find concept vectors) is not designed for relational settings. In our view, this would require non-negligible further research that is not directly related to the objective of our paper.
* We have added (in Table 1 of the revised paper) the results of the suggested GNN baseline (Correct and Smooth) on our splits of the graph benchmarks (Cora, Citeseer, Pubmed) using the same GNN backbone used for all other methods (see Table A of attached PDF).
*@XAga, @WYX8–Impact of message passing, task predictor, and aggregation on interpretability:*
**The meaning of 'interpretability' that we adhere to in this paper aligns with the standard notion used in CBMs: “Interventions make concept bottleneck models interpretable in terms of high-level concepts” (Koh et al. \[12\]) (quantitatively evaluated in Table 3)**. Hence, we note that interpretability in CBMs (as well as R-CBMs) is not necessarily related to the transparency of the task predictor, but rather depends on the fact that both the input and the output of the task predictor are interpretable units of information (e.g., concepts and tasks). Similarly to ResNets in the non-relational case, GNNs are considered black boxes because message-passing layers are usually mappings between non-interpretable features (e.g., raw features or embeddings). However, R-CBMs’ message passing propagates interpretable concepts. This makes it possible to inspect all message-passing steps (over possibly multiple iterations), similarly to how concepts expose the decision process of the task predictor in non-relational CBMs. In our experiments, we also considered fully transparent task predictors based on DCR, which learns logic rules to produce task predictions based on concept activations (cf. L198-208 and paragraph “Models” in Section 4 and A.4).
*@raJm, @CYvd–Concepts’ definition/annotation for each dataset:*
**The concepts’ definitions for RPS and Hanoi are reported in Appendix A.1. In all the other datasets there is no explicit distinction between the set of concepts and task predicates/atoms**. In order to further clarify this, we added in Section 4 the following example: “Let us consider as an example the Countries dataset. The task “locatedIn(France, Europe)” could be inferred by the concepts “locatedIn(Italy, Europe)” and “neighborOf(Italy, France)”, i.e. “locatedIn(France, Europe) $\\leftarrow$ locatedIn(Italy, Europe) $\\land$ neighborOf(Italy, France)”. But at the same time, “locatedIn(France, Europe)” could work as a concept to predict the task “locatedIn(Spain, Europe)” in another inference step, i.e. “locatedIn(Spain, Europe) $\\leftarrow$ locatedIn(France, Europe) $\\land$ neighborOf(France, Spain)”. This shows that the same predicate (locatedIn), and possibly the same ground atom (locatedIn(France, Europe)), can be used both for concepts and for tasks. As a result, there is not any additional cost to annotate concepts as these are the same labels already present in the original dataset.”.
Pdf: /pdf/af38f323541eda1b89619eddb4527e6e838fa313.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences | Accept (poster) | Summary: This paper presents a new distributed learning method called Byz-VR-MARINA-PP, which can achieve Byzantine robustness and partial participation at once. The authors theoretically analyze the convergence and Byzantine robustness of Byz-VR-MARINA-PP. Numerical results of Byz-VR-MARINA-PP are also provided in this paper.
Strengths: The paper is generally well-written. The problem of obtaining Byzantine robustness and partial participation is important in practical applications.
Weaknesses: 1. The proposed method is a combination of Byz-VR-MARINA and clipping, and the convergence analysis is similar to that of Byz-VR-MARINA. In light of these, the novelty of the paper is limited.
2. The proposed method requires computing the full gradient with probability $p$ at each iteration, which is computationally expensive, especially when the number of training instances is large.
3. The numerical experiment in this paper is conducted on a9a and MNIST datasets. The scale of these two datasets and the corresponding learning models is quite small given the computation power of today's devices. It would be interesting to see whether the proposed method works on larger datasets and models.
Technical Quality: 3
Clarity: 2
Questions for Authors: n/a
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and time. Below we address the concerns and comments raised by the reviewer.
>**The proposed method is a combination of Byz-VR-MARINA and clipping, and the convergence analysis is similar to that of Byz-VR-MARINA. In light of these, the novelty of the paper is limited.**
We kindly ask the reviewer to check our general response.
>**The proposed method requires computing the full gradient with probability $p$ at each iteration, which is computationally expensive, especially when the number of training instances is large.**
In practice, one can replace the full gradient computation with just a larger batch computation, i.e., with probability $p$ good workers can compute a mini-batched stochastic gradient with batch-size $b’ > b$ similarly to Geom-SARAH [1]. The usage of periodic full gradient computation is a common issue of many existing variance-reduced methods. Moreover, as the reviewer acknowledged, the considered problem is challenging. Therefore, we believe that despite the mentioned limitation, our work makes an important contribution to the field, and achieving similar results without full gradient/large batch computations at all is an interesting direction for future research. We would like to highlight that we have resolved this issue, at least in practical implementation, where we propose and experimentally analyze a version of our method that works with only a mini-batch gradient oracle (see lines 305-315, 350-363 and Figure 2).
[1] Horváth, Samuel, Lihua Lei, Peter Richtárik, and Michael I. Jordan. "Adaptivity of stochastic gradient methods for nonconvex optimization." SIAM Journal on Mathematics of Data Science 4, no. 2 (2022): 634-648.
>**The numerical experiment in this paper is conducted on a9a and MNIST datasets. The scale of these two datasets and the corresponding learning models is quite small given the computation power of today's devices. It would be interesting to see whether the proposed method works on larger datasets and models.**
Thank you for your comment. As requested, we have included an experiment with a larger model and dataset, namely a heterogeneous split of CIFAR10 with ResNet18 with GroupNorm. The setup for the MNIST dataset is described in the paper.
Attached (see attached pdf in the main response), we provide a sample of these extra experiments, concretely [Shift Back + Coordinate-wise Mean] and [ALIE + Coordinate-wise Mean]. We note that the results are consistent with the ones provided in the paper, i.e., clipping performs on par or better than its variant without clipping, and no robust aggregator is able to withstand the shift-back attack without clipping. Finally, we are currently working on experiments on even larger datasets and models. If the reviewer has some concrete suggestions, we would like to hear them.
However, we also want to highlight that our work is primarily theoretical, and experiments are mostly needed to illustrate and support our theoretical findings. Moreover, closely related works also consider models and datasets of similar sizes: e.g., (Karimireddy et al., 2021) also test their method (Byzantine-Robust Momentum SGD) on the training of an MLP on MNIST, and (Gorbunov et al., 2023) also test their Byz-VR-MARINA on logistic regression for the a9a dataset. Since in our work we propose versions of these methods with clipping and partial participation, it was natural for us to consider the same tasks in the experiments.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the explanation. However, there are some remaining concerns.
1. The authors propose to use a larger batch size $b'$ to avoid the heavy computation of full gradients. However, it is uncertain how this heuristic extension affects the theoretical results. A main contribution of this work is the convergence guarantee of achieving Byzantine robustness when allowing partial participation. Will this heuristic extension damage the theoretical results?
2. I appreciate the authors providing additional experimental results on the CIFAR dataset. However, the results are far from satisfactory. The test accuracy of the ResNet-18 model on the CIFAR-10 dataset can be up to 94% when trained with momentum SGD; in the additional experiment, the test accuracy is only about 50%. Meanwhile, periodically computing full gradients or stochastic gradients with a large batch size is computationally expensive. Therefore, I am not optimistic about the proposed method's scope of practical application.
Due to the reasons above, my rating remains unchanged.
---
Reply to Comment 1.1.1:
Title: Extra clarification
Comment: We thank the reviewer for contacting us and would like to elaborate further on the concerns mentioned.
>*The authors propose to use a larger batch size $b'$ to avoid the heavy computation of full gradients. However, it is uncertain how this heuristic extension affects the theoretical results. A main contribution of this work is the convergence guarantee of achieving Byzantine robustness when allowing partial participation. Will this heuristic extension damage the theoretical results?*
Such an extension can also be analyzed if we additionally assume that stochastic gradients have uniformly bounded variance $\sigma^2$ (a classical assumption in this case for recursive variance reduction). Then, our analysis will remain almost unchanged: in Lemma D.8, $\zeta^2$ will be replaced by $\zeta^2 + \frac{\sigma^2}{\widehat{C}b'}$ (up to a constant factor). This is well-aligned with a similar term appearing in the analysis of VR-MARINA (see Theorem D.3 from [1]; in particular, for $\widehat{C} = n$ we also have $\frac{\sigma^2}{nb'}$ term). **That is, the modification of Byz-VR-MARINA-PP proposed in our response is guaranteed to converge** to some neighborhood that depends on the variance of stochastic gradients, the number of clients, and the batch size $b'$. **If the number of clients is sufficiently large, which is the case for many FL applications where client sampling is used, then this neighborhood term becomes negligible.**
>*I appreciate the authors providing additional experimental results on the CIFAR dataset. However, the results are far from satisfactory. The test accuracy of the ResNet-18 model on the CIFAR-10 dataset can be up to 94% when trained with momentum SGD. In the additional experiment, the test accuracy is only about 50%.*
This experiment's purpose was to show that our approach works for various models. We emphasize that without clipping, the method does not converge under the SHB attack, not even to 50%. Recovering the best-known accuracy is not the goal of our experiments. Given the severe time limitations, we ran the methods only for $5$ epochs and did not tune the parameters extensively. Our current experiment corresponds to 200 communication rounds, and the obtained accuracy is consistent with or better than that of other works that also use a heterogeneous data split (e.g., see Figure 1 in [3]), while we note that in our case malicious workers are present. For the camera-ready version, we will provide experiments with 4000 communication rounds (similarly to [3]). Finally, please note that for a heterogeneous split in FL, we would generally expect much lower final accuracy, e.g., 78% in [3].
>*Meanwhile, periodically computing full gradients or stochastic gradients with a large batch size is computationally expensive. Therefore, I am not optimistic about the proposed method's scope of practical application.*
For the experiments with neural networks, we used a heuristic extension of our method described in lines 305-315 of our paper. In particular, we used Byzantine-Robust SGD with momentum from [2] as the base method and applied our heuristic to it. This method does not require full/large batch computations at all.
**We also note that our work is primarily theoretical, and we find it unfair to give our paper a “reject” score based on the criticism of the experiments.**
---
References:
[1] Gorbunov et al. "MARINA: Faster Non-Convex Distributed Learning with Compression", ICML 2021
[2] Karimireddy et al. “Learning from history for Byzantine robust optimization”, ICML 2021
[3] Reddi et al. “Adaptive Federated Optimization”, ICLR 2021 | Summary: This paper addresses an important problem: how to achieve Byzantine robustness when the clients partially participate in distributed learning and the Byzantine clients form a majority of sampled clients in some rounds. To solve this problem, the authors propose using the gradient clipping technique to control potential disturbances caused by Byzantine clients. The proposed method, which combines Byz-VR-MARINA and gradient clipping methods, has provable convergence for general smooth non-convex functions and PL functions.
Strengths: 1. The paper is well-written and easy to follow.
2. The investigated problem, achieving Byzantine robustness with clients' partial participation, is important and not well understood in the field of Byzantine-robust distributed learning.
Weaknesses: 1. The novelty is limited since the proposed method is a simple combination of the existing method Byz-VR-MARINA and gradient clipping. The idea of using gradient clipping to bound the potential harm caused by Byzantine clients in partial participation is straightforward and not surprising.
2. In Line 225, the authors claim that ''In contrast, Byz-VR-MARINA-PP tolerates any attacks even when all sampled clients are Byzantine workers since the update remains bounded due to the clipping". Why is this the case? Can Byz-VR-MARINA-PP still converge when Byzantine clients constitute a majority of the sampled clients in all rounds? This appears counter-intuitive. I believe there should be specific requirements on the client sample sizes $C$ and $\hat{C}$ concerning the ratio of Byzantine clients $\delta_{\text{real}}$ or the upper bound ratio $\delta$, but I did not find such conditions in Theorem 4.1 and Theorem 4.2. Have I overlooked something? If I have misunderstood any aspects of the paper, please correct me.
3. Since the gradient clipping method is able to control the potential harm caused by Byzantine clients, is it necessary to use the robust aggregator $\text{ARAgg}(\cdot)$? Can the mean aggregator be used as a substitute for the robust aggregator $\text{ARAgg}(\cdot)$?
4. The recent work [1] also considers partial participation within Byzantine-robust distributed learning. The authors should discuss it in the related works part.
[1] Allouah, Y., Farhadkhani, S., Guerraoui, R., Gupta, N., Pinot, R., Rizk, G., \& Voitovych, S. (2024). Tackling Byzantine Clients in Federated Learning. arXiv preprint arXiv:2402.12780.
5. Why does partial participation show faster convergence than full participation, as depicted in the middle of Figure 1? Could the authors provide insights or theoretical explanations for this phenomenon?
6. I am curious about how the client's sample size affects the proposed method in practice. Could the authors vary the client's sample size in experiments?
7. In Definition 2.1, '($\delta, c$)-Robust Aggregator' should be '($\delta_{\max}, c$)-Robust Aggregator', as $\delta_{\max}$ represents the breakdown point of the robust aggregator, not $\delta$; please refer to Karimireddy et al. (2021).
Technical Quality: 3
Clarity: 4
Questions for Authors: My detailed questions are listed in the above section; please refer to it.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and time. Below, we address the concerns and comments raised by the reviewer.
>**The novelty is limited since the proposed method is a simple combination of the existing method Byz-VR-MARINA and gradient clipping. The idea of using gradient clipping to bound the potential harm caused by Byzantine clients in partial participation is straightforward and not surprising.**
We kindly ask the reviewer to check our general response.
>**In Line 225, the authors claim that ''In contrast, Byz-VR-MARINA-PP tolerates any attacks even when all sampled clients are Byzantine workers since the update remains bounded due to the clipping". Why is this the case? Can Byz-VR-MARINA-PP still converge when Byzantine clients constitute a majority of the sampled clients in all rounds? This appears counter-intuitive. I believe there should be specific requirements on the client sample sizes $C$ and $\widehat{C}$ concerning the ratio of Byzantine clients $\delta_{\text{real}}$ or the upper bound ratio $\delta$, but I did not find such conditions in Theorem 4.1 and Theorem 4.2. Have I overlooked something? If I have misunderstood any aspects of the paper, please correct me.**
As we explain in Footnote 3 on page 6, for our results, we need $\widehat{C} \geq \max\{1, \delta_{\text{real}}n/\delta\}$. This condition ensures that with probability $p$, honest workers are guaranteed to be in the majority during the communication round, which allows the method to “adjust” the update direction $g^{k+1}$. Regarding line 225: we meant that even if for *some communication rounds* all sampled clients are Byzantines, our method provably tolerates this.
>**Since the gradient clipping method is able to control the potential harm caused by Byzantine clients, is it necessary to use the robust aggregator $\text{ARAgg}(\cdot)$? Can the mean aggregator be used as a substitute for the robust aggregator $\text{ARAgg}(\cdot)$?**
This is an excellent question. We have an analysis showing that robust aggregation is needed only with probability $p$ (when a large number of workers participate in the communication round) for the case $C = 1$. The proof can be generalized to the case of any $C > 1$ as well, but this will decrease the probability of steps in which the method actually makes progress, and as a result, the convergence rate will be slower.
>**The recent work [1] also considers partial participation within Byzantine-robust distributed learning. The authors should discuss it in the related works part.
[1] Allouah, Y., Farhadkhani, S., Guerraoui, R., Gupta, N., Pinot, R., Rizk, G., & Voitovych, S. (2024). Tackling Byzantine Clients in Federated Learning. arXiv preprint arXiv:2402.12780.**
Thank you for the reference; we will add it to the revised version. However, similarly to (Data & Diggavi, 2021), the method from the mentioned paper requires the number of sampled clients to be such that honest clients always form a majority (otherwise, there is a certain probability of divergence, and this probability grows over time).
>**Why does partial participation show faster convergence than full participation, as depicted in the middle of Figure 1? Could the authors provide insights or theoretical explanations for this phenomenon?**
When the honest clients have similar data, it is natural that partial participation saves a lot of computation, and this is exactly the setup of the experiment presented in Figure 1 (the data was homogeneously split between workers). More precisely, the more honest workers participate, the less noisy the aggregated vector is (since they compute stochastic gradients independently; this can be seen from our bounds as well – the larger $C$ and $\widehat{C}$ are, the fewer *communication rounds* the method requires). However, as the number of participating clients increases, the total number of stochastic gradient computations grows, i.e., the overall batch size grows. From standard practice with SGD, we know that for many ML problems the best batch size (in terms of computational complexity) is rarely the full batch, and very often one can achieve reasonable results with a relatively small batch size. This is exactly what our Figure 1 shows.
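To illustrate the variance argument, here is a toy simulation (with a hypothetical noise model and values, not the paper's experiment): averaging the stochastic gradients of $C$ independent honest workers reduces the noise variance roughly as $1/C$, while the total gradient computation per round grows linearly in $C$.

```python
import numpy as np

rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0])
sigma = 1.0  # per-worker stochastic-gradient noise level (assumed)

def aggregated_variance(C, trials=20000):
    # Each of C honest workers returns true_grad plus independent noise;
    # measure the mean squared error of their average around true_grad.
    noise = rng.normal(0.0, sigma, size=(trials, C, 2))
    avg = true_grad + noise.mean(axis=1)
    return ((avg - true_grad) ** 2).sum(axis=1).mean()

v1, v4 = aggregated_variance(1), aggregated_variance(4)
# Variance shrinks roughly as 1/C, so quadrupling participation
# quarters the noise but quadruples the per-round computation.
assert v4 < v1 / 2
```

This mirrors why a moderate cohort size can be computationally optimal: past some point, the extra gradients mostly buy diminishing variance reduction.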
>**I am curious about how the client's sample size affects the proposed method in practice. Could the authors vary the client's sample size in experiments?**
Thank you for the suggestion. We have added this case for the MNIST dataset, where we sample 1, 4, and 11 clients. We have attached (see the PDF in the main response) several samples from these experiments on the heterogeneous split with the Shift Back attack and Coordinate-wise Mean as the aggregator. We first compare how clipping behaves across different sample sizes. We note that the smaller the sample size, the faster the convergence in terms of total computations; however, for sample size 1, the method does not converge to the same precision due to high noise. For the version without clipping, we observe convergence only for sample size 11, as this is the only sample size for which it is guaranteed that within each round malicious clients cannot form a majority. Finally, we compare clipping vs. no clipping across these sample sizes. We note that the results are consistent with those provided in the paper, i.e., clipping performs on par with or better than its variant without clipping, and no robust aggregator is able to withstand the Shift Back attack without clipping unless malicious clients cannot form a majority.
>**Definition 2.1**
We prefer to call it a $(\delta,c)$-robust aggregator to emphasize that it requires knowledge of the parameter $\delta$. We will add this remark to the final version.
---
Rebuttal 2:
Title: Response to authors
Comment: I thank the authors for their careful responses to each of my comments. The authors have addressed some of my concerns. I will respond to the authors' responses which I think should be further clarified.
A1. Although the authors highlight several technical challenges in their responses, I'm still not convinced by the novelty of the paper. Since the proposed method is a simple combination of two existing methods, I believe the analysis may not be very challenging.
A2. I believe the condition $\hat{C} \geq \max\{1, \delta_{\text{real}} n / \delta\}$ is critical and should not be placed in the footnote. In the current presentation of Theorem 4.1 and Theorem 4.2, there seems to be no guarantee that the Byzantine clients form a majority only in some communication rounds rather than in all rounds.
A3. If I understand the authors' response correctly, they confirm the need for a robust aggregator to handle Byzantine attacks, even when the gradient clipping method is used. However, I still don’t fully understand why the robust aggregator is necessary. Could the authors please clarify this further?
A5. I agree that for many machine learning problems, the optimal batch size is not a full batch but rather a smaller one. This is especially true in neural network training, where stochastic noise can help achieve better solutions with smaller generalization errors. Since the objective function in the experiments is strongly convex, does this conclusion still apply? Could the authors provide more insights into this phenomenon?
Given the comments mentioned above, I am keeping my rating unchanged.
---
Rebuttal Comment 2.1:
Title: Response to reviewer
Comment: Thank you for your comment!
Let us address each issue one by one.
A1. We are having difficulty understanding the reviewer's perspective. In our response, we have thoroughly outlined the challenges faced during the analysis and provided **detailed explanations** as to why this work is not merely a "simple combination" of existing techniques. The **reviewer acknowledges** that we have done this. Despite this, the reviewer has not offered any **specific reasoning** to support the belief that the analysis is not particularly challenging. If the analysis were indeed as straightforward as suggested, it raises the question of why this is the **first result** to achieve Byzantine robustness in the context of partial participation without relying on strong assumptions, such as additional data on the server, etc.
We believe that a scientific discussion should be based on **well-reasoned** arguments and evidence, rather than subjective feelings or impressions. We would greatly appreciate a more detailed explanation or critique so that we can engage in a meaningful and **constructive** dialogue.
A2. Please note that the condition on $\hat{C}$ is only required once every several communication rounds, and this happens with a small probability $p$. We will certainly add a footnote to clarify this point, as suggested.
Additionally, it is important to understand that we do not need a situation where the good clients form a majority in every round with absolute certainty. What is necessary is that the probability of good clients forming a majority is greater than zero. If this probability were not greater than zero, it would indicate that the Byzantine clients form a majority not just in a subset of clients (cohort) but across the entire client population. In such a case, it would be impossible to develop any effective method, as the system would be fundamentally compromised.
We addressed this crucial point at the beginning of the paper (lines 104-105), but we will ensure that it is emphasized appropriately in the camera-ready version.
A3. Let us provide a more detailed clarification on this point. Robust aggregators are designed to function effectively as long as the Byzantine clients do not form a majority, meaning that the proportion of Byzantine clients $\delta$ is within the acceptable range $\delta \leq \delta_{\max} < 0.5$. For a more precise definition, please refer to Definition 2.1 in the paper.
However, when the Byzantine clients do form a majority, robust aggregators are no longer effective, as they fail to maintain their robustness under such conditions. To mitigate this issue and handle scenarios where the Byzantine clients might dominate, we incorporate a clipping technique. This approach helps to manage the influence of outliers and reduces the potential impact of malicious clients on the overall aggregation process.
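As a purely illustrative sketch (a toy coordinate-wise median, not the specific aggregator analyzed in the paper, with made-up vectors): when Byzantine inputs form a majority in a round, even a robust aggregator can be pulled arbitrarily far, whereas clipping each communicated vector first keeps the aggregate bounded by the clipping level.

```python
import numpy as np

def cw_median(vectors):
    """Coordinate-wise median, a standard robust aggregator."""
    return np.median(np.stack(vectors), axis=0)

def clip(v, lam):
    """Rescale v so that its norm is at most lam."""
    n = np.linalg.norm(v)
    return v if n <= lam else v * (lam / n)

honest = [np.array([1.0, 1.0]) for _ in range(2)]
byz = [np.array([1e6, 1e6]) for _ in range(3)]  # Byzantine majority this round

# With a Byzantine majority, the median is hijacked completely.
hijacked = cw_median(honest + byz)
assert np.linalg.norm(hijacked) > 1e5

# Clipping every input first bounds each coordinate by lam,
# so the aggregate's norm is at most lam * sqrt(d).
lam = 1.0
bounded = cw_median([clip(v, lam) for v in honest + byz])
assert np.linalg.norm(bounded) <= lam * np.sqrt(2) + 1e-9
```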
We hope this expanded explanation clarifies how our method addresses situations where the number of Byzantine clients might form a majority.
A5. Please note that our method employs variance reduction techniques for both data and client sampling. Because the method is designed to reduce variance, increasing the cohort size or batch size does not further reduce variance (as this is already achieved through the variance reduction technique). Consequently, increasing the cohort size or batch size does not lead to substantial gains in convergence. In other words, the benefits of using a larger cohort or batch are limited when variance reduction is already in place.
In this context, the situation is similar to what is observed with other variance-reduced methods. Therefore, utilizing a smaller cohort (i.e., a subset of clients) is more advantageous for minimizing communication load. By doing so, we can maintain efficiency and effectiveness while managing the overall communication overhead, which is a key consideration in practical implementations.
We hope that we have addressed all the raised issues. If all concerns have been resolved, we would appreciate it if the score could be increased. If there are still any outstanding issues, we remain open to providing further clarification. | Summary: The paper studied the federated learning problem with Byzantine clients and partial participation. The paper proposed a new algorithm called Byzantine-tolerant Variance-Reduced MARINA with Partial Participation, or Byz-VR-MARINA-PP, and proved its convergence upper bound when the aggregator is a $(\delta, c)$-Robust Aggregator. The key idea of Byz-VR-MARINA-PP is to use clipping. In addition, the paper also proposed a heuristic algorithm for the general case. The performance of the proposed algorithm is verified via experiments as well.
Strengths: 1. The paper proposed a new federated learning algorithm coping with Byzantine clients and partial participation. The main focus is on partial participation. The algorithm is called Byz-VR-MARINA-PP, and the paper proved its convergence upper bound when the aggregator is a $(\delta, c)$-Robust Aggregator.
2. Using the idea of Byz-VR-MARINA-PP, the paper proposed a general algorithm.
3. The paper is very well written. The explanation is very clear.
Weaknesses: 1. The proposed algorithm and its analysis focused on the case when the number of local updates is 1.
2. There is no analysis of the heuristic algorithm.
3. The experiments only use two simple datasets, LIBSVM and MNIST.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can the results be extended when the number of local updates is larger than 1?
2. Is it possible to obtain any analysis of the heuristic algorithm under some assumptions?
3. Could you please add more experiments using harder and more popular datasets in Federated Learning?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The proposed algorithm and its analysis focused on the case when the number of local updates is 1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and time.
>**The proposed algorithm and its analysis focused on the case when the number of local updates is 1.**
>**Can the results be extended when the number of local updates is larger than 1?**
Thank you for raising this point! We understand that in a Federated Learning setting, multiple local steps are important. However, we first need to understand how to deal with partial participation for provably Byzantine-robust training in the case of one local update, since partial participation is an interesting and complicated question in its own right in the presence of Byzantine clients. Moreover, we have page limits for the paper, so we cannot cover all possible scenarios. Taking this into consideration, we leave the analysis of multiple local steps for future work, since we believe a proper treatment of the effect of multiple local steps deserves a separate paper.
>**There is no analysis of the heuristic algorithm.**
We kindly disagree that this is a weakness of our paper. We provided an extensive analysis of the proposed method Byz-VR-MARINA-PP. The heuristic framework is an additional idea that we provide to generalize the proposed algorithm. Note that before this paper, there was no other algorithm with partial participation and provable Byzantine-robustness guarantees that did not require additional assumptions on the number of participating clients. Moreover, we have a page limit for the paper, so we cannot cover all settings and analyze all methods.
>**The experiments only use two simple datasets, LIBSVM and MNIST.**
>**Could you please add more experiments using harder and more popular datasets in Federated Learning?**
Thank you for your comment. As requested, we have included an experiment with a larger model and dataset, namely a heterogeneous split of CIFAR-10 with ResNet-18 with GroupNorm. The setup for the MNIST dataset is described in the paper.
Attached (see the PDF in the main response), we provide a sample of these extra experiments, concretely [Shift Back + Coordinate-wise Mean] and [ALIE + Coordinate-wise Mean]. We note that the results are consistent with the ones provided in the paper, i.e., clipping performs on par with or better than its variant without clipping, and no robust aggregator is able to withstand the Shift Back attack without clipping. Finally, we are currently working on experiments on even larger datasets and models. If the reviewer has concrete suggestions, we would be glad to hear them.
> **Is it possible to obtain any analysis of the heuristic algorithm under some assumptions?**
If we assume a condition similar to the smoothness of the communicated vectors $g^k_i$ for good workers, we can apply a similar analysis and obtain similar guarantees for the general method. We can add such an analysis in the appendix. However, we are not aware of any examples besides Byz-VR-MARINA for which this smoothness property holds. We tried to analyze our heuristic extension of Byzantine-Robust Momentum SGD (Karimireddy et al., 2021, 2022) but faced some technical difficulties that we have not overcome yet.
Strengths: The paper is well written, but the claims of novelty are problematic; cf. below.
Weaknesses: The paper starts with a bold claim, "literally, *all* existing methods with provable Byzantine robustness require the full participation of clients." Such a claim overlooks tens or at least a dozen papers on Asynchronous Byzantine machine learning that have been published in the past decade. All of these asynchronous solutions allow various forms of partial participation. Such a claim has no place in a properly researched paper on the topic.
Update: While the analysis made in the paper is non-trivial, it relies on several confusions about what asynchrony means in distributed systems. In particular, any asynchronous system allows Byzantine nodes not only to form a majority during some communication rounds, but to constitute 100% of the nodes during that round. For instance, in an asynchronous system, one update from one node can be enough to move on to the next iteration.
Technical Quality: 2
Clarity: 3
Questions for Authors: Did you compare to the literature on Asynchronous Byzantine ML?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**The paper starts with a bold claim, "literally, all existing methods with provable Byzantine robustness require the full participation of clients." Such a claim overlooks tens, if not hundreds, of papers on Asynchronous Byzantine machine learning that have been published in the past decade. All of these asynchronous solutions allow various forms of partial participation. Such a claim has no place in a properly researched paper on the topic.**
The reviewer provides a very strong and vague claim that our work overlooks “tens, if not hundreds,” of papers that address the problem of partial participation in Byzantine-robust learning **without providing even a single reference** supporting this claim. **The review completely ignores the essence of our paper – new results and algorithms.** This is not a scientific approach to writing the review.
- First of all, we never claimed that there exist no approaches considering Byzantine robustness in the context of partial participation. In contrast, we cite (Data & Diggavi, 2021), who also study this problem, and discuss the relation of our results to their ones in detail.
- Next, in the phrase "literally, all existing methods with provable Byzantine robustness require the full participation of clients", a central part for us is **provable Byzantine robustness**. The existing work (Data & Diggavi, 2021) requires that at each round, the majority of participating clients are honest, which in terms of the theoretical analysis, is very similar to the case of full participation.
- Moreover, we found **just five works** [1-5], not even ten and certainly not close to hundreds, as the reviewer claims. However, none of these approaches is **provably** robust against Byzantine attacks without additional assumptions (on extra data or on frequency filters). Indeed, in [1], the authors propose to use a Lipschitz filter and frequency filters in order to filter out Byzantine workers. However, Theorem 4 from [1] establishes convergence only to some neighborhood that depends on the variance of the stochastic gradients and does not depend on the stepsize or the number of Byzantine clients. This result is shown for the homogeneous data regime, where convergence to any optimization error can be achieved. Next, in [2, 4], the authors use additional validation data on the server to decide whether to accept updates from workers. This assumption is restrictive for many FL applications where the data on clients is private and not available on the server. In [3], the authors propose so-called Buffered ASGD (and its momentum version), where the key idea is to split workers into buffers and wait until each buffer gets at least one gradient update. When the number of buffers is sufficiently large (it should be at least $2B$, where $B$ is the number of Byzantine workers), the authors show that BASGD converges. However, this means that to make a step, BASGD requires collecting a sufficiently large number of gradients such that the good buffers form a majority, which is closer to full participation than to partial participation. We also found work [5], where the authors do not provide a theoretical convergence analysis. We will add a discussion of works [1-5] to our paper: **this will be just a minor addition that will not change the main message of our work at all.**
- Finally, the asynchronous communication protocol does not fit the setting of our paper, where we consider client sampling. For example, one cannot model synchronous communication with sampling of more than one client per round through an asynchronous protocol. **Therefore, the “criticism” provided by the reviewer is completely unrelated, and it does not justify such a low score.**
---
References:
[1] Damaskinos et al. Asynchronous Byzantine Machine Learning (the case of SGD). ICML 2018
[2] Xie et al. Zeno++: Robust Fully Asynchronous SGD. ICML 2020
[3] Yang & Li. Buffered Asynchronous SGD for Byzantine Learning. JMLR 2023
[4] Fang et al. AFLGuard: Byzantine-robust Asynchronous Federated Learning. ACSAC 2022
[5] Zhang et al. Anti-Byzantine Attacks Enabled Vehicle Selection for Asynchronous Federated Learning in Vehicular Edge Computing. arXiv:2404.08444
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for taking the time to reply.
For the claim "literally, all existing methods etc." to be correct, there needs to be no single other work with provable Byzantine robustness allowing partial participation. If there exists just one, then the statement is an exaggeration, and of the type that demotivates further reading; I apologize if my review felt harsh in that sense.
Regarding existing works, you could add several other contributions on asynchrony in Byzantine robust ML such as (at least) :
"Dynamic Byzantine-Robust Learning: Adapting to Switching Byzantine Workers" (ICML 2024)
"Robust collaborative learning with linear gradient overhead" (ICML 2023)
"Democratizing Machine Learning: Resilient Distributed Learning with Heterogeneous Participants" (IEEE SRDS 2022)
"Collaborative learning in the jungle (decentralized, byzantine, heterogeneous, asynchronous and nonconvex learning)" (NeurIPS 2021)
"GARFIELD: System Support for Byzantine Machine Learning" (IEEE DSN 2021)
"Fault-Tolerance in Distributed Optimization: The Case of Redundancy" (ACM PODC 2020).
So yes, probably not a hundred, but at least a dozen; and again, one counterexample is enough not to make the bold and non-nuanced statement the paper has.
Most importantly, there seems to be a confusion in the paper and the rest of this discussion about what asynchrony means: unbounded communication delays; please refer to, e.g., Section 5 of the last reference ("Fault-Tolerance in Distributed Optimization: The Case of Redundancy") about partial asynchrony.
Another aspect of this confusion appears in statements such as "allowing Byzantines to form a majority during certain rounds of communication" (cf. comments on OpenReview). But in an asynchronous setting, adversaries can not only form a majority during a communication round; the round could also consist *only* of adversaries (i.e., the Byzantine nodes constitute 100% of the updates in such a round). See, e.g., "Distributed Algorithms", Lynch 1996.
I recognise that my score was biased by the repeated unpleasant experience of reading statements such as "there exists no other method" in modern ML papers, and will update it. I hope you now understand the serious problem in having such a statement, and that you will also address the issues about what partial participation and asynchrony mean in a distributed setting. For now, this confusion prevents me from assessing the real contribution of this paper.
---
Reply to Comment 1.1.1:
Title: Further clarifications
Comment: We thank the reviewer for contacting us. Below, we address further comments provided by the reviewer.
>*For the claim "literally, all existing methods etc." to be correct, there needs to be no single other work with provable Byzantine robustness allowing partial participation. If there exists just one, then the statement is an exaggeration, and the type that demotivates further reading, and I apologies if my review felt harsh in that sense.*
As we explained in our rebuttal, there is no work that addresses the same problem as we do in the same or comparable generality (without extra assumptions on some extra data or on the frequency filters). We will adjust the writing and add the discussion of extra related work -- it can be done easily and does not change the scientific essence of the paper. **Therefore, the mentioned writing issue is minor -- this should not be the reason for rejection (see NeurIPS 2024 Reviewers Guidelines https://neurips.cc/Conferences/2024/ReviewerGuidelines).**
>*Regarding existing works, you could add several other contributions on asynchrony in Byzantine robust ML such as (at least):*
We thank the reviewer for providing additional references.
- [1] does not consider partial participation.
- [2] requires all-to-all communication, i.e., it is not applicable to partial participation.
- [3] requires that the number of sampled clients is such that robust aggregation is possible, i.e., their theoretical results require a majority of honest workers at each communication round, while our work has no such requirement.
- [4] considers the setup when the number of participating clients has to be at least $C \cdot B$, where $C \geq 2$ and $B$ is the overall number of Byzantine workers, i.e., the majority of participating clients need to be honest.
- [5] proposes a library for Byzantine-Robust Machine Learning and does not provide theoretical results (in particular, it does not provide a theory for partial participation).
- [6] does not consider partial participation.
Overall, we would like to highlight that all of the mentioned works focus **on a different problem setup**, and the methods from the mentioned papers **are not guaranteed to converge in the setup we consider**. **Therefore, the mentioned works do not undermine the contribution, novelty, and significance of our paper.**
>*Most importantly, there seems to be a confusion in the paper and the rest of this discussion about what asynchrony means : unbounded communication delays, please refer to, e.g. Section 5 of the last reference (Fault-Tolerance in Distributed Optimization: The Case of Redundancy) about partial asynchrony.*
We understand this aspect, but as we already explained, asynchronous settings are quite different from synchronous settings with partial participation. Without additional assumptions on extra data on the server or frequency filters, in the asynchronous regime it is impossible to guarantee anything: even one fast Byzantine client can sequentially send multiple small updates that shift the model arbitrarily far before good workers send their updates.
>*Another aspect of this confusion appears in statements such as "allowing Byzantines to form a majority during certain rounds of communication" (cf comments on openreview). But in an asynchronous setting, adversaries can not only form a majority during a communication round, but the round could also consists only of adversaries (i.e. the Byzantine nodes constitute 100% of the updates in such a round). See e.g. "Distributed Algorithms" Lynch 1996.*
As we explained above, without additional assumptions, asynchronous algorithms may fail in the presence of Byzantine nodes and **they cannot be directly applied to the problem we solve**. Moreover, we are not aware of any other paper (with asynchronous or synchronous communications) that shows **theoretical convergence guarantees in the setup we consider**.
**If the reviewer agrees with us, we kindly ask the reviewer to further improve the score.**
---
References:
[1] "Dynamic Byzantine-Robust Learning: Adapting to Switching Byzantine Workers" (ICML 2024)
[2] "Robust collaborative learning with linear gradient overhead" (ICML 2023)
[3] "Democratizing Machine Learning: Resilient Distributed Learning with Heterogeneous Participants" (IEEE SRDS 2022)
[4] "Collaborative learning in the jungle (decentralized, byzantine, heterogeneous, asynchronous and nonconvex learning)" (NeurIPS 2021)
[5] "GARFIELD: System Support for Byzantine Machine Learning" (IEEE DSN 2021)
[6] "Fault-Tolerance in Distributed Optimization: The Case of Redundancy" (ACM PODC 2020). | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback and time. Since several reviewers had concerns about the novelty, which we kindly but firmly disagree with, we prepared a general message addressing this.
As we mention in the introduction (page 2, left column, lines 49-54) and explain in Section 3 (paragraph “New ingredients: client sampling and clipping”), all existing methods (and Byz-VR-MARINA in particular) cannot be naively combined with client sampling/partial participation with an arbitrarily small number of sampled clients: this can lead to communication rounds where Byzantine workers form a majority, which allows them to shift the updates arbitrarily far from the solution. To handle such situations, we propose applying gradient clipping to the vectors communicated by the clients. As we explain in the same paragraph (page 6, lines 226-230), gradient clipping makes the updates bounded. Therefore, even if Byzantine workers accidentally form a majority during some rounds, the norm of the shift they can produce is bounded by the clipping level $\lambda_{k+1}$.
However, the introduction of the clipping does not come for free: if the clipping level is too small, clipping can create a noticeable bias in the updates. Because of this issue, existing works such as (Gorbunov et al., 2020; Zhang et al., 2020) use non-trivial policies for the choice of the clipping level, and the analysis in these works differs significantly from the existing analysis for the methods without clipping. The analysis of Byz-VR-MARINA is based on the unbiasedness of vectors $\mathcal{Q}(\hat \Delta_i(x^{k+1}, x^k))$, i.e., on the following identity: $\mathbb{E}[\mathcal{Q}(\hat \Delta_i(x^{k+1}, x^k)) \mid x^{k+1}, x^k] = \Delta_i(x^{k+1}, x^k) = \nabla f_i(x^{k+1}) - \nabla f_i(x^k)$. Since $\mathbb{E}[\text{clip}\_{\lambda\_{k+1}}(\mathcal{Q}(\hat \Delta_i(x^{k+1}, x^k))) \mid x^{k+1}, x^k] \neq \nabla f\_i(x^{k+1}) - \nabla f\_i(x^k)$ in general, to analyze Byz-VR-MARINA-PP we also use a special choice of the clipping level: $\lambda_{k+1} = \alpha_{k+1} \|\|x^{k+1} - x^k\|\|$. To illustrate the main reasons for that, let us consider the case of uncompressed communication ($\mathcal{Q}(x) \equiv x$). In this setup, for large enough $\alpha_{k+1}$ we have $\text{clip}\_{\lambda\_{k+1}}\hat \Delta\_i(x^{k+1}, x^k) = \hat \Delta\_i(x^{k+1}, x^k)$ for all $i\in \mathcal{G}$ (due to Assumption 2.6), which allows us to use a proof similar to the one for Byz-VR-MARINA when good workers form a majority in a round. Moreover, when Byzantine workers form a majority, our choice of the clipping level allows us to bound the second moment of the shift from the Byzantine workers as $\sim \|\| x^{k+1} - x^k \|\|^2$ (see Lemmas D.9 and D.12), i.e., the second moment of the shift is of the same scale as the variance of $\lbrace g_i \rbrace_{i\in \mathcal{G}}$, which goes to zero (see page 5, lines 205-209).
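For intuition, the clipping operator and the relative clipping level $\lambda_{k+1} = \alpha_{k+1} \|\|x^{k+1} - x^k\|\|$ can be sketched as follows (a minimal illustration; the value of $\alpha$ is arbitrary here, not the constant from the analysis):

```python
import math

def clip(v, lam):
    """clip_lam(v) = min(1, lam / ||v||) * v, so ||clip(v)|| <= lam."""
    norm = math.sqrt(sum(c * c for c in v))
    if norm <= lam:
        return list(v)
    return [lam / norm * c for c in v]

def clipping_level(x_new, x_old, alpha=10.0):
    """Relative clipping level, proportional to the last model step."""
    return alpha * math.sqrt(sum((a - b) ** 2 for a, b in zip(x_new, x_old)))

# A Byzantine update of arbitrary magnitude is bounded after clipping,
# while small enough honest updates (in the spirit of Assumption 2.6)
# pass through unchanged.
lam = clipping_level([1.0, 1.0], [0.9, 1.1])
byzantine = clip([1e9, -1e9], lam)
assert math.sqrt(sum(c * c for c in byzantine)) <= lam + 1e-9
```

The point is the invariant in the last line: whatever a Byzantine client sends, its contribution is at most $\lambda_{k+1}$ in norm, and this bound shrinks together with $\|\|x^{k+1} - x^k\|\|$ as the method converges.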
Next, to properly analyze these two situations, we overcame another technical challenge related to the estimation of the conditional expectations and probabilities of corresponding events (see Lemmas D.9 - D.10 and formulas for $p_G$ and $\mathcal{P}_{\mathcal{G}_C^k}$ at the beginning of Section 4). In particular, the derivation of formula (22) is quite non-standard for the stochastic optimization literature: there are two sources of stochasticity – one comes from the sampling of clients and the other one comes from the sampling of stochastic gradients and compression. This leads to the estimation of the variance of the average of a random number of random vectors, which is novel on its own. In addition, when the compression operator is used, the analysis becomes even more involved since one cannot directly apply the main property of unbiased compression (Definition 2.2), and we use Lemma D.6 in the proof to address this issue. It is also worth mentioning that, in contrast to Byz-VR-MARINA, our method does not require full participation even with a small probability $p$. Instead, it is sufficient for Byz-VR-MARINA-PP to sample a large enough cohort of $\hat{C}$ clients with probability $p$ to ensure that Byzantine workers form a minority in such rounds. **Taking into account all of these multiple technical challenges that we circumvented, we believe that our choice of the clipping level is not obvious beforehand, and our analysis significantly differs from the analysis of Byz-VR-MARINA.**
Finally, we also want to emphasize that the idea of using gradient clipping to handle the Byzantine workers in the case of partial participation is novel on its own. Karimireddy et al. (2021) used clipping to construct robust aggregation, but it was never used in the way we apply it. We believe our work is an important step towards building more efficient Byzantine-robust methods supporting partial participation.
If our paper gets accepted, we will expand these clarifications in the main text (the accepted papers have one extra page).
Pdf: /pdf/d12bd86307124530d564313ac1f0e2ee2ccd99e0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This work considers the problem of Byzantine robustness in the framework of federated learning. The main contribution of this work is proposing and analyzing a novel federated algorithm, Byz-VR-MARINA-PP that utilizes gradient clipping. This algorithm is an extension of prior work, Byz-VR-MARINA, but importantly provides for the first time Byzantine robustness guarantees even in partial participation settings (even when Byzantine nodes could form a majority in some of the training rounds). The main idea of this method revolves around limiting the effects of Byzantine clients per round which is a consequence of the gradient clipping. As a result, Byz-VR-MARINA-PP is resilient to shift-back attacks. The authors further strengthen the proposed algorithm by incorporating communication compression. Theoretical results in terms of convergence guarantees are provided. The authors compare their results to SOTA works form the FL with Byzantine-workers literature and showcase the cost of obtaining Byzantine robustness in this more challenging regime. Additionally, numerical results showcase the merits of the proposed method.
Strengths: - The paper addresses a very interesting and relevant problem in the area of Federated Learning.
- The presentation of this work is very good and despite its theoretical depth it is easy to follow.
- The theoretical results derived are comparable to the ones previously known for the full participation regime (despite being somewhat weaker) and the authors are comparing and contrasting their results with prior literature in a fair and clear manner.
- Although most of the tools used in this work are already known, there has been a significant effort to combine them in an efficient manner, and the derivation of the theoretical results is non-trivial.
Weaknesses: - As I mentioned before, most of the tools used in this work are not novel, and as a result the novelty of this work is somewhat limited.
- Although the main focus of this paper is theoretical, it has to be pointed out that the experimental results are derived in somewhat limited and less challenging settings, i.e., the a9a LIBSVM and MNIST datasets with 15 good and 5 Byzantine workers.
Technical Quality: 4
Clarity: 4
Questions for Authors: - In the main body of the paper the authors chose to use the more restrictive Assumption 2.5 whereas similar analysis has been performed in the Appendix with the less restrictive Assumption D.5. Could the authors please elaborate on their decision and the differences between the two results?
- On line 304 the authors mention that in some cases partial participation is beneficial to their algorithm. Could they please provide some intuition behind this observation?
- Although the theoretical results are convincing, I believe that including more experimental results in broader regimes (or with a few more clients) would provide stronger evidence supporting the superiority of your method. This is not necessary but merely a suggestion.
- In the Introduction/Contributions both "Byz-VR-MARINA-PP" and "By-VR-MARINA-PP" are met. Is this a typo?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and time. We appreciate your positive evaluation of our work.
>**As I mentioned before, most of the tools used in this work are not novel, and as a result the novelty of this work is somewhat limited.**
We kindly ask the reviewer to check our general response.
>**Although the main focus of this paper is theoretical, it has to be pointed out that the experimental results are derived in somewhat limited and less challenging settings, i.e., the a9a LIBSVM and MNIST datasets with 15 good and 5 Byzantine workers.**
>**Although the theoretical results are convincing, I believe that including more experimental results in broader regimes (or with a few more clients) would provide stronger evidence supporting the superiority of your method. This is not necessary but merely a suggestion.**
Thank you for your comment! As requested, we have included an experiment with a larger model and dataset, namely a heterogeneous split of CIFAR10 with ResNet18 with GroupNorm. The setup for the MNIST dataset is described in the paper.
Attached (see attached pdf in the main response), we provide a sample of these extra experiments, concretely [Shift Back + Coordinate-wise Mean] and [ALIE + Coordinate-wise Mean]. We note that the results are consistent with the ones provided in the paper, i.e., clipping performs on par or better than its variant without clipping, and no robust aggregator is able to withstand the shift-back attack without clipping. Finally, we are currently working on the experiments on even larger datasets and models, also with a larger number of clients.
>**In the main body of the paper the authors chose to use the more restrictive Assumption 2.5 whereas similar analysis has been performed in the Appendix with the less restrictive Assumption D.5. Could the authors please elaborate on their decision and the differences between the two results?**
Assumption D.5 is a direct generalization of Assumption 2.5 in the case of $B=0$. In the main part of the paper, we decided to consider a simplified version of the assumption to make it easier to read. The main message and the core ideas remain valid under the simplified version of the assumption, and it allows us to present the result in a more compact and cleaner form. Also, we believe that obtaining convergence results for more complicated settings is important, and we provide the analysis for the more general case in the supplementary materials. In contrast to Assumption 2.5, in Assumption D.5, instead of uniformly bounded heterogeneity, we have a bound that depends on the norm of the full gradient and a constant, which makes this bound much more general.
>**On line 304 the authors mention that in some cases partial participation is beneficial to their algorithm. Could they please provide some intuition behind this observation?**
The key reason for this phenomenon is that, regardless of the real number of participating clients, the usage of a $(\delta,c)$-robust aggregator affects the final rate (i.e., stepsize) through terms depending on the parameter $\delta$ of the aggregator (e.g., see formula (7)). Therefore, in the situation described in lines 296-301, the rate depends on two terms: one is decreasing in $C$ (roughly speaking, it corresponds to the decrease of the variance of the stochastic gradient by a factor of $\sim C$, since the batch size is increased $\geq C/2$ times for each round in the worst case), and the second one is independent of $C$ and depends only on $\delta$. Therefore, when the second term dominates the first one, it is optimal to decrease $C$ as long as the second term remains the dominant one and as long as $C \geq \max\{1, \delta_{\text{real}}n / \delta\}$. Such a strategy allows us to save on overall computation while keeping the number of communication rounds the same.
Moreover, from the practical perspective, when the honest clients have similar data, it is natural that partial participation saves a lot of computation, and this is exactly the setup of the experiment presented in Figure 1 (data was homogeneously split between workers). More precisely, the more honest workers participate, the less noisy the aggregated vector is (since they compute stochastic gradients independently; this can be seen from our bounds as well – the larger $C$ and $\widehat{C}$ are, the fewer *communication rounds* the method requires). However, as the number of participating clients increases, the total number of stochastic gradient calculations grows, i.e., the overall batch size grows. From standard practice with SGD, we know that for many ML problems the best (in terms of computation complexity) batch size is rarely the full batch, and very often one can achieve reasonable results with a relatively small batch size. This is exactly what our Figure 1 shows.
>**In the Introduction/Contributions both "Byz-VR-MARINA-PP" and "By-VR-MARINA-PP" are met. Is this a typo?**
Thank you for noting this! It is indeed a typo, and "Byz-VR-MARINA-PP" is the correct option. We will fix this typo and also check for other typos in the text.
---
Rebuttal Comment 1.1:
Title: Post Rebuttal
Comment: I appreciate the authors' efforts to address my question and include more experiments.
After carefully reading the comments from the other reviewers and the responses of the authors, I find that all my concerns are addressed. I am inclined to keep my score, and I look forward to the reviewers' discussion.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Dear Reviewer,
Thank you for checking our responses and other reviews. We are glad to hear that we addressed all your concerns. We would also appreciate it if you could champion our paper in discussion with other reviewers: as the reviewer acknowledged and as we explain in the paper, **the problem we addressed is very important and non-trivial, and we managed to circumvent multiple technical challenges to achieve our results (as explained in the general rebuttal message)**.
Thank you once again for your feedback and time. If the paper gets accepted, we promise to incorporate all the requested changes and add extra experiments attached to the general rebuttal message, in particular, to the camera-ready version.
Best regards,
Authors | null | null | null | null | null | null |
Confusion-Resistant Federated Learning via Diffusion-Based Data Harmonization on Non-IID Data | Accept (poster) | Summary: This paper proposes an importance sampling method with a diffusion model to achieve data harmonization in federated learning with non-i.i.d. data. The proposed method utilizes the indicator function from self-paced learning to measure the reliability of loss on each client and calculates the optimal data distribution according to the indicator function. The proposed method is evaluated in five different datasets empirically.
Strengths: This paper proposes a novel method to measure the importance and difficulty of each data sample, and samples the local data with importance sampling to achieve lower model update divergence. Experiments are conducted on five different datasets, with a comparison to ten other baselines, sufficiently showing the superior performance of the proposed method.
Weaknesses: Despite the sufficient experimental results, the poor clarity of this paper weakens its soundness. There are several confusing points in the context of this paper:
1. It is unclear what the indicator function means and how it is derived. As the core of this paper, the indicator function is only provided with a formulation and never explained how it is derived. Additionally, it is not straightforward to understand the relationship between the indicator function and the optimal data distribution.
2. How the diffusion model is trained and used is not discussed in the paper. What we know from the paper is that the diffusion model takes the model embedding and the indicator function together to generate the optimal data distribution. It is weird why the diffusion model is necessary here. It seems that what we need is only a method that can estimate the optimal data distribution according to the indicator function.
3. The definitions of the indicator function in Eq. (1) and Eq. (16) seem to be inconsistent. The indicator function is calculated for each sample in Eq. (1), while it is calculated for each client in Eq. (16).
After all, the core idea of this paper is simple: re-assigning sampling probability to achieve overall balanced training data distribution such that the model update divergence can be mitigated. It is doubtful whether such a complicated framework is necessary to achieve this goal, while the improvement seems to be limited in the final converged accuracy.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. Why is the optimal uncertainty obtained by minimizing the indicator function?
2. How is the distribution decoder trained? Since the decoder should output an unknown distribution from the denoised latent representation, there should not be a ground truth $x_{t_i}$ when training the decoder.
3. Are the diffusion model, the model encoder, and the distribution decoder trained locally or globally? What is the training overhead of these components?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: See the weaknesses and questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W1. Discussion on Indicator Function.
**Response:** We appreciate the reviewer's insightful comments. Below is a comprehensive explanation:
(1) **Motivation of the Indicator Function**. The Indicator Function $I_{\lambda}(l_i, \sigma_i)$ is designed to dynamically adjust sample weighting based on loss values and uncertainties, **inspired by self-paced learning** [1]. It follows curriculum learning principles, prioritizing simpler tasks before more complex ones, to improve training convergence efficiency. **The proof of this assertion can be found in global rebuttal**.
(2) **Derivation of the Indicator Function**. Our design is heuristic, similar to confidence-aware cross-entropy [2]. It consists of a loss-amplifying term and a regularization term, enhancing the contribution of high-loss samples while regularizing the uncertainty estimation. For further discussion, see the comparison of the convergence speed of CRFed with FedAvg in the global rebuttal.
(3) **The Relationship Between the Indicator Function and the Optimal Data Distribution.** Please refer to the **response to Q1** for detailed explanation.
# W2. Necessity of diffusion model
**Response:** Thanks for your comment. In CRFed, the diffusion-based data harmonization process is integrated into each global aggregation round. This means the harmonization update occurs every time the global model is aggregated and broadcast to the clients, i.e., once per global round. The model is trained online during the federated learning process.
To address the reviewer's concern, we conducted comparative experiments with alternative methods to estimate the optimal data distribution based on the indicator function:
- **Simple Weighted Sampling (SWS)**: Directly adjusts sampling probabilities based on indicator function values without additional transformation.
- **Kernel Density Estimation (KDE)**: Uses KDE to smooth and adjust data distribution based on the indicator function.
The results, shown in **Table 8 on the one-page pdf**, indicate the diffusion model achieves the best performance. **The diffusion model's iterative denoising process effectively harmonizes data distributions**, ensuring robust and consistent alignment, crucial for improving convergence and performance.
From a practical perspective, CRFed harmonizes these data distributions, ensuring consistent updates and improved convergence. CRFed outperforms state-of-the-art methods and remains effective, as its training can be performed during the local phase (although it runs on the server), ensuring real-time operation of the FL system.
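As a rough, purely illustrative sketch of the noise-then-denoise idea (this is our own 1-D caricature, not the paper's Eqs. (7)-(12); the reverse step uses a hand-coded drift toward a target mean standing in for the learned denoiser):

```python
import random

random.seed(0)
T, beta = 50, 0.02                 # illustrative schedule, not the paper's
local = [random.gauss(3.0, 0.5) for _ in range(1000)]  # skewed local data
target_mean = 0.0                  # stand-in for the global distribution

x = list(local)
for _ in range(T):                 # forward: inject Gaussian noise
    x = [(1 - beta) ** 0.5 * v + beta ** 0.5 * random.gauss(0, 1) for v in x]
for _ in range(T):                 # reverse: denoise toward the target
    x = [v + beta * (target_mean - v) for v in x]

mean_after = sum(x) / len(x)       # closer to target_mean than the raw data
```

The qualitative takeaway matches the rebuttal's claim: iterating small noising/denoising steps pulls the local distribution toward the desired one, rather than re-weighting samples in a single shot as SWS or KDE would.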
# W3. Issue about Eq. (1) and Eq. (16)
**Response:** We appreciate the reviewer's observation. The indicator function is defined at the client level but computed from the sample level. Specifically, Eq. (1) calculates the indicator function for each sample, while Eq. (16) aggregates it at the client level. There is no inconsistency between Eq. (1) and Eq. (16). We will improve the clarity in a future version.
# Q1. Why is the optimal uncertainty obtained by minimizing the indicator function?
**Response:** Thank you for your comment. Minimizing the indicator function is crucial for:
- **Balancing Loss and Uncertainty:** Minimizing $I_{\lambda}(l_i, \sigma_i)$ ensures higher importance for samples with higher loss values while controlling uncertainty.
- **Preventing Overfitting:** The regularization term $\lambda (\log \sigma_i)^2$ discourages high uncertainties, preventing overfitting to unreliable samples.
Additionally, minimizing the indicator function helps control the parameter update magnitude, as shown by:
$$
\left| \theta_{t+1} - \theta_t \right| = \eta \left| \left( \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k \right) \nabla_{\theta} l_i \right| \leq \eta C \left| \nabla_{\theta} l_i \right|,
$$
where $\eta$ is the learning rate, and $C$ is a constant that bounds the term $\left( \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k \right)$. This inequality indicates that by choosing $\sigma_i^*$ to minimize the indicator function, we can control the magnitude of the parameter updates.
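To make the minimization concrete, here is a tiny numeric sketch under the *assumed* form $I_{\lambda}(l_i, \sigma_i) = (l_i - \tau)\sigma_i + \lambda (\log \sigma_i)^2$, pieced together from the terms quoted in this thread (the paper's Eq. (1) is authoritative; $\tau$, $\lambda$, and the grid are illustrative):

```python
import math

def indicator(l_i, sigma, tau=0.5, lam=0.1):
    # Assumed form: loss-weighted term plus the log-uncertainty regularizer.
    return (l_i - tau) * sigma + lam * math.log(sigma) ** 2

def optimal_sigma(l_i, tau=0.5, lam=0.1):
    """Grid-search the sigma* minimizing the indicator for one sample."""
    grid = [0.01 * k for k in range(1, 500)]
    return min(grid, key=lambda s: indicator(l_i, s, tau, lam))

# A hard sample (loss above tau) gets a small optimal uncertainty, keeping
# its parameter update bounded as in the inequality above; an easy sample
# (loss below tau) is pushed toward large sigma by this toy form.
sigma_hard = optimal_sigma(l_i=1.5)   # interior minimum, roughly 0.26
sigma_easy = optimal_sigma(l_i=0.2)   # runs to the grid boundary here
```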
# Q2. Discussion on the distribution decoder
**Response:** Thank you for your comment. The role of the Distribution Decoder is essentially to map the latent representation $z_t$ to the distribution space, so the training is conducted offline on a known ground truth distribution. For a specific task, using the task's test set (or validation set) is a good choice. Other details can be found in section A.2.2 of the original text. We hope these explanations can address your concern.
# Q3. Details of model training
**Response:** Thank you for your comment. The diffusion model, the model encoder, and the distribution decoder are all trained globally. Specifically, the model encoder and the distribution decoder are trained offline.
To quantify the training overhead introduced by our approach, we have measured the wall-clock time per round and the total convergence time. The results are shown in **Table 2 on the one-page pdf**. We observe that the wall-clock time per round for CRFed is slightly higher compared to other methods due to the additional noise/denoise operations, but the total convergence time is comparable to other methods. The slight increase in computational cost is balanced by the improved model performance, making it a worthwhile trade-off.
## Reference
[1] Self-paced learning: An implicit regularization perspective. AAAI, 2017.
[2] Data parameters: A new family of parameters for learning a differentiable curriculum. Neurips, 2019.
---
Rebuttal Comment 1.1:
Comment: I appreciate the response from the author, which helps me better understand the methodology in the paper. However, I am still concerned about the motivation for using the diffusion model for data importance sampling. The additional computation and data required for training the diffusion model offline are still unclear. And the comparison with other importance sampling methods for FL, e.g., FedIR (Federated Visual Classification with Real-World Data Distribution) and Harmony (HARMONY: Heterogeneity-Aware Hierarchical Management for Federated Learning System), is absent. Accordingly, I would increase my score to 4.
---
Rebuttal 2:
Title: Further discussions on motivation and computational costs
Comment: Thank you for your timely feedback and for raising the score. We’ve provided the following response to address the remaining questions you raised:
### 1. Motivation for Diffusion Model
Previous importance sampling methods typically require prior analysis of the data relevance at each client-side [3,4] or necessitate deriving optimal sampling weights based on assumptions such as the convexity of the loss function [1,2]. While these methods offer strong theoretical guarantees, they are somewhat limited in their adaptability to real-world FL scenarios. For instance, both FedIR[3] and Harmony[4] assume that the server has knowledge of the local distributions of all clients. Although this assumption does not violate the privacy-preserving principles of FL, it can be challenging to obtain in real-world applications.
In contrast, the diffusion model based method we proposed does not depend on these assumptions. Instead, it iteratively adjusts the data distributions during the FL process itself. This enables the model to dynamically harmonize the diverse, non-IID data across clients **without requiring explicit distributional assumptions or centralized access to all client data distributions**. Guided by the indicator function, our CRFed can derive the optimal sampling strategy for each local node.
Moreover, as shown in the table below, empirical experiments demonstrate that **the diffusion model achieves superior performance, outperforming other benchmark methods**.
#### The performance of different importance sampling methods on CIFAR-100 under various β values.
| Method | β=0.1 | β=0.3 | β=0.5 |
|----------|-------|-------|-------|
| ISFedAvg | 0.232 | 0.285 | 0.305 |
| ISFL | 0.237 | 0.296 | 0.314 |
| FedIR | 0.258 | 0.311 | 0.352 |
| Harmony | 0.246 | 0.313 | 0.354 |
| CRFed | **0.280** | **0.345** | **0.389** |
It’s worth noting that this comparison isn’t entirely fair, as each importance sampling method operates under different assumptions. For example, ISFL requires a validation set to update the empirical gradient Lipschitz constants for each local model, while FedIR requires all clients to upload the conditional distribution of images given class labels that matches the target distribution. Nevertheless, our **CRFed outperforms the others even under less restrictive conditions**—unlike ISFedAvg and ISFL, it doesn’t require assumptions about the loss function or gradient variance, and unlike FedIR and Harmony, it doesn’t require centralized access to all client data distributions before calculating the importance sampling weights.
### 2. Additional Computation
We acknowledge that CRFed requires additional computational resources, primarily on the server side. However, in FL, the main computational burden typically lies in local model training and communication. In practical FL systems, **increasing server-side computation to achieve performance gains is often desirable**, as it does not compromise the real-time operation of the FL system and tends to offer stronger economic benefits. For example, in CLIP2FL [5], to mitigate data heterogeneity and class imbalance, the server generates federated features based on client-uploaded gradients and uses CLIP's text encoder for prototype contrastive learning. Similarly, in FedMRUR [6], the server must process compressed data received from clients, calculate the corresponding logits, and perform global knowledge matching, which is computationally intensive.
Lastly, we would like to express our sincere gratitude to Reviewer CpsC. Your feedback has inspired us to delve deeper into discussions regarding CRFed, and the additional experiments have further strengthened its contributions. We plan to include these results and discussions in future versions. We hope this response addresses some of your concerns.^_^
### References
[1] **ISFedAvg**. Federated learning under importance sampling. IEEE Transactions on Signal Processing. 2022.
[2] **ISFL**. Federated Learning for Non-iid Data with Local Importance Sampling. IEEE Internet of Things Journal. 2024
[3] **FedIR**. Federated Visual Classification with Real-World Data Distribution. ECCV. 2020.
[4] **Harmony**. Heterogeneity-aware hierarchical management for federated learning system. MICRO. 2022.
[5] **CLIP2FL**. CLIP-Guided Federated Learning on Heterogeneity and Long-Tailed Data. AAAI. 2024.
[6] **FedMRUR**. Federated learning with manifold regularization and normalized update reaggregation. Neurips. 2023. | Summary: This paper presents a framework called CRFed to address the significant challenges posed by non-i.i.d. data in federated learning environments. This work introduces a diffusion-based data harmonization mechanism that effectively reduces disparities in data distributions across different nodes. Additionally, the paper proposes a confusion-resistant strategy that leverages an adaptive indicator function based on importance sampling.
Overall, this paper's writing is clear and easy to follow. The figures are well-drawn, allowing for a quick understanding of the research motivation and methodological design. The core contribution of this paper is the introduction of a diffusion-based data harmonization method to obtain valuable data distribution for global model aggregation, which is a very brave and innovative idea. Additionally, how to use synthetic data to enhance training effectiveness has long been an open question in the field of FL, and this work clearly provides a very promising approach. Therefore, I recommend that this paper be accepted.
Strengths: 1. The method described in the paper is presented very clearly, the formulas are well-expressed, and the charts are clear.
2. The diffusion-based data harmonization mechanism is especially creative. This method uses Gaussian noise injection and iterative denoising to gradually align local data distributions with a desired global distribution. This process is clearly explained in Equations (7) to (12) and greatly reduces the impact of data differences. The detailed explanation of the forward and reverse processes, as shown in Figure 2, highlights a smart and practical way to handle non-i.i.d. data.
3. The results in Tables 1 and 2 consistently show that CRFed outperforms other methods in both accuracy and convergence speed. The detailed comparison with various state-of-the-art methods under different $\beta$ values and edge node configurations highlights the practical applicability and scalability of the proposed framework.
Weaknesses: 1. Some notations could be more clearly defined. For instance, in the definition of the Indicator Function $ I_\lambda (l_i, \sigma_i) $, it's not immediately clear how $\tau$ (the confidence threshold) is dynamically adjusted or chosen. A more detailed explanation of how $\tau$ impacts the learning process and its optimal selection criteria would be beneficial.
2. The iterative denoising process might face scalability issues with very large datasets or a high number of iterations. The paper should evaluate the performance and efficiency of the denoising steps in such scenarios to understand the method’s scalability better.
3. It would be beneficial to include a brief discussion on future work in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have thoroughly outlined the limitations of their work as well as the potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W1. Some notations could be more clearly defined.
**Response:** Thanks for pointing this out. In our framework, $\tau$ represents a confidence threshold that determines the difficulty level of samples based on their loss values. The term $(l_i - \tau) \sigma_i$ in the Indicator Function adjusts the weight of each sample based on its loss value $l_i$ relative to the threshold $\tau$. This adjustment mechanism prioritizes samples with higher loss values (more difficult samples) when $l_i > \tau$, giving them higher weights, while samples with lower loss values (easier samples) are given lower weights. The impact of $\tau$ on the learning process is twofold:
- By setting an appropriate $\tau$, the model can focus on learning from more difficult samples first, thereby implementing a form of curriculum learning. This helps the model to progressively handle more complex patterns in the data, leading to improved generalization and robustness.
- $\tau$ can be dynamically adjusted during the training process to reflect the model's evolving understanding of the data. Initially, $\tau$ can be set to a lower value to focus on easier samples, and as training progresses, $\tau$ can be increased to prioritize more difficult samples. This dynamic adjustment ensures that the model continually challenges itself, preventing stagnation and promoting continuous improvement.
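To make the threshold dynamics described above concrete, here is a minimal sketch (not from the paper; the linear schedule, the start/end values, and all names are hypothetical illustrations of the described behavior):

```python
def tau_schedule(round_idx, total_rounds, tau_start=0.5, tau_end=2.0):
    """Hypothetical linear warm-up of the confidence threshold tau:
    low early on (focus on easier samples), raised as training
    progresses so that harder samples are prioritized."""
    frac = min(round_idx / max(total_rounds - 1, 1), 1.0)
    return tau_start + frac * (tau_end - tau_start)


def sample_weight(loss, tau, sigma):
    """The (l_i - tau) * sigma_i term of the indicator function:
    positive (up-weighting) for hard samples with loss above tau,
    negative for easy samples below it."""
    return (loss - tau) * sigma
```

With this schedule, `sample_weight` flips sign at `loss == tau`, so raising `tau` over rounds shifts the up-weighted region toward progressively harder samples.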
# W2. Evaluating the performance and efficiency of the denoising steps.
**Response:** We appreciate the reviewer's concern regarding the scalability of the iterative denoising process, especially when dealing with very large datasets or a high number of iterations. To address this, we have conducted additional experiments to evaluate the performance and efficiency of the denoising steps under such scenarios.
We conducted experiments on the CIFAR-100 and NIPD datasets, varying the dataset size and the number of iterations in the denoising process. The key parameters evaluated include:
- Dataset sizes: Full CIFAR-100 (50,000 samples) and NIPD (80,000 samples)
- Number of denoising iterations: 10, 50, 100, 200
The results are shown in **Table 7 on the one-page pdf**. We find that the performance improvements (in terms of accuracy for CIFAR-100 and mAP for NIPD) saturate as the number of iterations increases. These results demonstrate that while the iterative denoising process is effective, the performance benefits diminish beyond a certain number of iterations. Therefore, for practical applications, we recommend limiting the number of denoising iterations to around 100, where a good balance between performance and efficiency is achieved. This approach ensures that the method remains scalable even for large datasets.
# W3. A brief discussion on future work.
**Response:** Thanks for the suggestion. Future work could explore more sophisticated noise injection techniques in the diffusion-based data harmonization process. For instance, adaptive noise schemes that consider the specific characteristics of local data distributions could potentially improve the alignment of data across clients. We will include relevant discussions in a future version to inspire further work. | Summary: This paper introduces CRFed, a framework designed to handle the challenges of non-i.i.d. data in federated learning. By using a diffusion-based data harmonization mechanism and a confusion-resistant strategy, CRFed aims to reduce data distribution differences among participating nodes and improve model consistency. Extensive experiments show that CRFed significantly enhances accuracy and convergence speed compared to existing methods.
In my opinion, the diffusion-based data harmonization mechanism is an innovative approach to dealing with data distribution differences. This paper not only introduces new theoretical concepts but also validates them through comprehensive experiments, making it a significant contribution to the field.
Strengths: The paper exhibits several strengths that highlight its contributions and impact on the field of federated learning:
1. The introduction of a diffusion-based data harmonization mechanism is a fresh approach to tackling data distribution disparities. I think this idea has great potential to improve the stability of learning despite client heterogeneity. There's a mapping relationship between local and global data distributions, and using a diffusion model to capture this relationship is really innovative.
2. The paper conducts extensive experiments on various non-i.i.d. datasets, including MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and NIPD. Especially with NIPD, which is a very challenging dataset, I congratulate the authors for achieving impressive performance on it.
3. The framework's ability to handle an increasing number of edge nodes and its adaptive learning rate adjustment contribute to improved scalability and training efficiency. This makes CRFed a practical solution for real-world federated learning scenarios.
Weaknesses: There are some areas where the paper could be improved to enhance its clarity, robustness, and applicability:
1. The method involves several hyperparameters, like the variance of Gaussian noise, the regularization coefficient, and the dynamically adjusted confidence threshold. A more thorough sensitivity analysis of these parameters would make the paper more complete.
2. In cases where data distributions are extremely diverse or have intrinsic properties that are difficult to capture through these processes, the effectiveness of the harmonization might be limited.
3. While the method aims to enhance model consistency and reduce data disparities, the additional communication overhead introduced by the diffusion mechanism and importance sampling is not discussed.
4. Theorem 3.1 is valuable. However, during the derivation process, the purpose of equation (3) needs to be clearly explained.
5. I suggest highlighting the data showing performance advantages in Table 2 in bold.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations are discussed in the paper by the authors. There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W1. Sensitivity analysis.
**Response:** We appreciate the reviewer's insightful comment regarding the sensitivity analysis of our hyperparameters. To address this, we conducted additional experiments to analyze the sensitivity of the key hyperparameters in our CRFed framework, namely the variance of Gaussian noise ($\beta_t$), the regularization coefficient ($\lambda$), and the dynamically adjusted confidence threshold ($\tau$).
We performed the sensitivity analysis on the CIFAR-10 and CIFAR-100 datasets, which were also used in the original experiments. The default values for the hyperparameters in our original experiments were:
- Variance of Gaussian noise, $\beta_t$: 0.1
- Regularization coefficient, $\lambda$: 0.1
- Dynamically adjusted confidence threshold, $\tau$: dynamically adjusted starting from 0.5
We varied each hyperparameter while keeping the others fixed to their default values and measured the test accuracy. The results are summarized in **Tables 3, 4, and 5 on the one-page pdf**. The results indicate that our CRFed framework is relatively robust to variations in these hyperparameters.
# W2. Additional experiments addressing scenarios where data distributions are extremely diverse.
**Response:** We appreciate the reviewer's concern regarding the effectiveness of our harmonization mechanism in handling extremely diverse data distributions. To address this, we have conducted additional experiments to evaluate and demonstrate the robustness and adaptability of our CRFed framework under such challenging conditions.
To empirically validate the effectiveness of our approach under extreme data heterogeneity, we conducted additional experiments using the CIFAR-100 dataset with even smaller Dirichlet concentration parameters ($\beta$) to simulate highly imbalanced and diverse data distributions. Specifically, we set $\beta$ to 0.01 and 0.05 to create scenarios with extreme non-IID characteristics. The results are shown in **Table 6 on the one-page pdf** and demonstrate that the CRFed framework significantly outperforms the baseline FedAvg method even under extreme data heterogeneity.
# W3. Discussion on additional overhead.
**Response:** Thank you for your comment. We have measured the wall-clock time to evaluate the computational overhead introduced by the proposed approach. Results are shown in **Table 2 on the one-page pdf**. We observe that the wall-clock time per round for CRFed is slightly higher compared to other methods due to the additional noise/denoise operations. However, the total convergence time is comparable to other methods. Although CRFed involves additional computations per round, the convergence in terms of accuracy and model robustness is achieved efficiently. The slight increase in computational cost is balanced by the improved model performance, making it a worthwhile trade-off.
# W4. Discussion on equation (3).
**Response:** We thank the reviewer for recognizing the value of Theorem 3.1. We acknowledge that the purpose of equation (3) in the derivation process may not have been sufficiently clear. The purpose of this equation is to define the indicator function $I_{\lambda}(l_i, \sigma_i)$, which measures the reliability of each sample based on its loss value $l_i$ and associated uncertainty $\sigma_i$. This function is critical for the self-paced learning mechanism in our framework, where samples are prioritized based on their difficulty and uncertainty.
# W5. Formatting issue.
**Response:** Thanks for the suggestion. In future versions, we will make the corresponding annotations.
---
Rebuttal Comment 1.1:
Comment: Thank you for offering such a thorough rebuttal!
Considering the performance improvements and the importance of the problem tackled in this paper, the current overhead is acceptable. Future research can further enhance this aspect.
Overall, I am pleased with this work and would like to raise my score to 7.
---
Reply to Comment 1.1.1:
Comment: We appreciate your recognition of our work's significance. Thank you again for your precious time and valuable suggestions. | Summary: The work proposes a new FL approach, called CRFed, for addressing data heterogeneity in FL settings. CRFed relies on a diffusion-based approach for harmonizing clients' data heterogeneity by performing data noise injection and iterative denoising, followed by a curriculum learning approach, which employs an indicator function to assign weights to training samples and guide sample selection. The authors conduct a number of experiments across various domains and FL environments to showcase the promise of their approach.
Strengths: - A self-paced (curriculum) learning approach for each client based on samples' loss and uncertainty.
- Thorough empirical evaluation against many existing methods in multiple federated environments.
- Ablation results show the importance of each proposed component (indicator function, diffusion mechanism, client selection)
Weaknesses: - Some of the applied methods seem very ad-hoc and further elaboration is needed on their selection.
- More information is needed on the update frequency of the harmonization approach and respective curriculum sequence of the clients' training samples.
- Lack of theoretical framework limits the contribution of this work.
Technical Quality: 2
Clarity: 2
Questions for Authors: Comments, textual corrections and typos:
- Are the definition of the indicator function and the use of the Lambert function based on previous work? Please cite accordingly and explain why these formulas were used. Moreover, why is there a subtraction of the confidence threshold from the loss value? Also, the reported value ranges of the variables in the indicator function are not clear (i.e., the ranges of $\sigma_i, \tau$).
- Why is the adaptive learning rate adjusted based on the clients' indicator function? This is not clear at all.
- How often is the noise/denoise operation performed? At the beginning of training, at every federation round, or after r rounds? If it is done post-initialization, have you measured the effect of the approach in terms of wall-clock time? How expensive is the proposed approach in terms of time to convergence (not rounds)? It would be great if you could include these results as well.
- Figure 4 is not readable. What is the $\beta$ value used for the two domains? Moreover, have you created 100 partitions and sub-sampled clients at each round, or considered all available clients at every round?
- Please fix your citation style and add the missing space between the text and the reference throughout the paper.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W1: Further elaboration on the selection of the methods
**Response:** Thanks for pointing this out. Below, we offer a detailed explanation and theoretical basis for the key methods employed in CRFed framework.
1. The Indicator Function $I_{\lambda}(l_i, \sigma_i)$ is designed to dynamically adjust sample weighting based on loss values and uncertainties, **inspired by self-paced learning** [1]. **The construction of the indicator function is grounded in the principles of curriculum learning**, which prioritizes simpler tasks before progressively addressing more complex ones. The design of the Indicator Function $I_{\lambda}(l_i, \sigma_i)$ is intended to improve training convergence efficiency. The proof of this assertion can be found in the global rebuttal.
2. The diffusion-based data harmonization mechanism is the core of our approach, aiming to mitigate data distribution disparities. This method is based on the principles of denoising diffusion probabilistic models (DDPMs), which have demonstrated success in various generative tasks. The iterative process of noise addition and removal aligns data distributions effectively, reducing heterogeneity across clients. **Through the diffusion-based data harmonization mechanism, we can ensure that the data distributions of all clients gradually converge during the noise addition and removal processes**, thereby reducing the impact of data heterogeneity on model updates.
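Since the paper's Equations (7)-(12) are not reproduced in this rebuttal, the sketch below only illustrates the standard DDPM forward (noising) process that the harmonization mechanism is said to build on; the variable names and the scalar-feature setting are assumptions:

```python
import math
import random


def forward_noising(x0, betas, rng=random):
    """Standard DDPM forward process: repeatedly apply
    x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps,  eps ~ N(0, 1).
    As the number of steps grows, x_T approaches a standard Gaussian
    regardless of x0 -- the property that lets noising pull heterogeneous
    local distributions toward a common one before denoising."""
    x = list(x0)
    for beta in betas:
        keep, spread = math.sqrt(1.0 - beta), math.sqrt(beta)
        x = [keep * xi + spread * rng.gauss(0.0, 1.0) for xi in x]
    return x
```

After `T` steps the surviving signal is scaled by `prod(sqrt(1 - beta_t))`, which is what the closed-form $\sqrt{\bar{\alpha}_t}$ factor in DDPMs captures.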
# W2: More information on the update frequency of the harmonization approach and respective curriculum sequence of the clients' training samples.
**Response:** Thanks for the suggestion. In the CRFed framework, the diffusion-based data harmonization process is integrated into each global aggregation round. This means the harmonization update occurs once per global round, ensuring timely adjustments to data distributions across clients.
- **Maximum global rounds ($T_G$):** 100
- **Local training cycles per global round ($E_l$):** 1
Our curriculum learning approach involves dynamically adjusting the sampling weights of clients' training samples based on their difficulty, as measured by the Indicator Function $I_{\lambda}(l_i, \sigma_i)$. This function is recalculated at each local training cycle to reflect the latest state of the global model. The curriculum sequence progresses from easier to more difficult samples, facilitating a self-paced learning paradigm.
# W3: Theoretical framework.
**Response:** Thanks for the suggestion. In the global rebuttal, we have added theoretical analysis.
# Q1:More discussion on the Indicator Function.
**Response:** We appreciate the reviewer's detailed comments. Our indicator function, inspired by confidence-aware cross-entropy [2], includes a loss-amplifying term and a regularization term to amplify high-loss samples' contributions while regularizing uncertainty estimation. The Lambert W function helps find the optimal uncertainty $\sigma_i^*$, ensuring difficult samples are appropriately weighted during training. Subtracting the confidence threshold $\tau$ from the loss value $l_i$ centers the loss values, making it easier to prioritize difficult samples (i.e., those with $l_i > \tau$). The uncertainty $\sigma_i$ ranges from $10^{-3}$ to $10$ and $\tau$ is dynamically adjusted based on the weighted average loss, initially set to 1.0 and updated every 10 rounds to reflect the dataset's evolving difficulty. We will include these details in future versions of the manuscript. Thank you for your valuable feedback.
# Q2: Discussion on the adaptive learning rate.
**Response:** Thank you for your comment. Adjusting the learning rate $\eta_i$ based on $I_{\lambda}(l_i, \sigma_i)$ ensures that clients with more challenging data receive higher learning rates, allowing significant updates. This balances the learning pace among clients, preventing dominance by any single client and ensuring equitable contributions. Higher indicator function values typically correspond to more difficult or uncertain data, leading to slower convergence with a uniform learning rate. Increasing the learning rate for these clients accelerates their learning, speeding up overall convergence.
Additional experiments (see **Table 1 on the one-page pdf**) show that models with adaptive learning rates based on the indicator function demonstrate faster convergence and higher final accuracy compared to fixed learning rates.
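Since the exact adjustment rule is not spelled out in this response, the following is only a hypothetical sketch of the described behavior (clients with larger indicator values receive proportionally larger learning rates); the scaling form and `gamma` are invented for illustration:

```python
def adaptive_lr(base_lr, indicator_values, gamma=0.5):
    """Hypothetical per-client learning-rate scaling: clients whose
    indicator value is above the mean (harder / more uncertain data)
    receive a larger step size, and vice versa."""
    mean_ind = sum(indicator_values) / len(indicator_values)
    return [base_lr * (1.0 + gamma * (v - mean_ind) / (abs(mean_ind) + 1e-8))
            for v in indicator_values]
```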
# Q3. The principle and details of the noise/denoise operation.
**Response:** Thank you for your comment.
The noise/denoise operation in the CRFed framework is performed at every global federation round. This ensures consistent alignment of data distributions across clients, maintaining effective model updates. After each global model aggregation, noise is added and the denoising process adjusts the data distributions before the next round of local training begins.
We have measured the wall-clock time to evaluate the computational overhead introduced by the proposed approach; results are shown in **Table 2 on the one-page pdf**. Although our system involves additional computations per round, the total convergence time is comparable.
# Q4. Discussion on the $\beta$ and experimental details.
**Response:** Thank you for your comment. In our experiments, we used the Dirichlet distribution to create non-IID data partitions. The $\beta$ value, which controls the degree of data heterogeneity, was set to 0.5 for both domains in the experiments presented in Figure 4. We have considered all available clients at every round of federated learning in our experiments. We hope these clarifications address your concerns.
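For readers unfamiliar with Dirichlet-based non-IID partitioning, the sketch below shows the standard label-skew construction (the paper's actual splitting code is not available; the remainder-handling detail is an implementation choice). Smaller $\beta$ yields more skewed partitions; $\beta = 0.5$ was used for Figure 4 per the response above.

```python
import random
from collections import defaultdict


def dirichlet_partition(labels, n_clients, beta, seed=0):
    """Label-skew non-IID split: for each class, draw client proportions
    from Dirichlet(beta) and allocate that class's samples accordingly."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    clients = [[] for _ in range(n_clients)]
    for idxs in by_class.values():
        # Dirichlet(beta) via normalized Gamma(beta, 1) draws
        draws = [rng.gammavariate(beta, 1.0) for _ in range(n_clients)]
        total = sum(draws)
        props = [d / total for d in draws]
        start = 0
        for c, p in enumerate(props):
            take = int(round(p * len(idxs)))
            clients[c].extend(idxs[start:start + take])
            start += take
        clients[-1].extend(idxs[start:])  # rounding remainder to last client
    return clients
```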
# Q5. Formatting issue.
**Response:** Thank you for pointing this out. We will fix the citation style in a future version.
# Reference
[1] Self-paced learning: An implicit regularization perspective. AAAI, 2017.
[2] Data parameters: A new family of parameters for learning a differentiable curriculum. NeurIPS, 2019.
---
Rebuttal Comment 1.1:
Comment: I truly appreciate the authors' thorough feedback and their efforts in addressing my concerns, particularly their detailed explanation of the design principles for the indicator function and the additional analysis regarding the update step size when comparing CRFed to FedAvg. Overall, I am satisfied with the authors' response and their attention to my concerns, and I am pleased to increase my rating to 6 (Weak Accept).
---
Reply to Comment 1.1.1:
Comment: Thanks again for your time and effort on reviewing our work! | Rebuttal 1:
Rebuttal: # Conclusion
We sincerely thank all the reviewers for their insightful and valuable comments! Overall, we are encouraged that they find the contributions of our work noteworthy and valuable. Here is a summary of the key points acknowledged by the reviewers:
- The proposed CRFed framework, including the diffusion-based data harmonization mechanism and confusion-resistant strategy, is well-received. (all reviewers)
- The theoretical foundations and detailed explanations provided for key methods are appreciated (Reviewers cbew and FLG7), although some areas required further clarification (Reviewers nZWV and CpsC).
- The empirical validation on multiple datasets and the comprehensive set of experiments, including sensitivity analysis and ablation studies, were considered robust. CRFed achieved SOTA results. (all reviewers)
- The scalability and efficiency of the proposed methods, as well as their potential impact on federated learning applications, were highlighted positively (Reviewers nZWV, cbew, and FLG7).
We have addressed the issues according to the reviews, which can be summarized as follows:
- We have provided a more thorough explanation and theoretical basis for the Indicator Function $I_{\lambda}(l_i, \sigma_i)$, including detailed derivations and convergence analysis. (**We include this in the global rebuttal**)
- Additional sensitivity analysis of key hyperparameters has been included, with results presented for the CIFAR-10 and CIFAR-100 datasets. (**Tables 3, 4, and 5 on the one-page pdf**)
- We conducted further experiments to evaluate the computational efficiency, robustness, and effectiveness of CRFed. (**Tables 1, 2, 6, 7, and 8 on the one-page pdf**)
- Minor corrections and improvements, such as fixing citation styles.
# Theoretical analysis
We have added relevant theory on the design of the Indicator Function $I_{\lambda}(l_i, \sigma_i)$ in CRFed, including convergence analysis and convergence speed analysis.
## Convergence
Consider a simplified federated learning framework where the global model parameters $\theta$ are updated at iteration $t$ as follows:
$$
\theta_{t+1} = \theta_t - \eta \sum_{i=1}^n \nabla_{\theta} I_{\lambda}(l_i, \sigma_i),
$$
where $\eta$ is the learning rate and $n$ is the number of clients. For simplicity, we consider a single client and expand $\nabla_{\theta} I_{\lambda}(l_i, \sigma_i)$:
$$
\nabla_{\theta} I_{\lambda}(l_i, \sigma_i) = \nabla_{\theta} \left( (l_i - \tau) \sigma_i + \lambda (\log \sigma_i)^2 \right).
$$
By the chain rule, we have:
$$
\nabla_{\theta} I_{\lambda}(l_i, \sigma_i) = \sigma_i \nabla_{\theta} l_i + (l_i - \tau) \nabla_{\theta} \sigma_i + 2\lambda \frac{\log \sigma_i}{\sigma_i} \nabla_{\theta} \sigma_i.
$$
Based on the definition of the optimal $\sigma_i^*$, $\nabla_{\theta} \sigma_i^*$ can be approximated as proportional to $\nabla_{\theta} l_i$, i.e.,
$$
\nabla_{\theta} \sigma_i^* \approx k \nabla_{\theta} l_i,
$$
where $k$ is a constant. Thus, $\nabla_{\theta} I_{\lambda}(l_i, \sigma_i^*)$ simplifies to:
$$
\nabla_{\theta} I_{\lambda}(l_i, \sigma_i^*) = \left( \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k \right) \nabla_{\theta} l_i.
$$
Since $\sigma_i^*$ is obtained by minimizing the indicator function, we have:
$$
\sigma_i^* \approx \exp \left( W \left( \frac{-(l_i - \tau)}{2\lambda} \right) \right).
$$
Finally, the model update rule can be expressed as:
$$
\theta_{t+1} = \theta_t - \eta \left( \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k \right) \nabla_{\theta} l_i.
$$
To prove convergence, we note that the step size of parameter updates is finite:
$$
\left| \theta_{t+1} - \theta_t \right| = \eta \left| \left( \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k \right) \nabla_{\theta} l_i \right| \leq \eta C \left| \nabla_{\theta} l_i \right|,
$$
where $C$ is a constant. Thus, as long as the learning rate $\eta$ is appropriately chosen, the update step size will gradually decrease, ensuring the convergence of the model.
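The closed form for $\sigma_i^*$ can be sanity-checked numerically. The sketch below (not from the paper) implements the principal branch of the Lambert W function with Newton's method and verifies that the resulting $\sigma^*$ satisfies the first-order condition of $I_{\lambda}(l_i, \sigma_i) = (l_i - \tau)\sigma_i + \lambda (\log \sigma_i)^2$ from the derivation above; note that the sign placement inside the Lambert-W argument here follows from differentiating that expression and may be written under a different convention in the paper.

```python
import math


def lambert_w(x, tol=1e-12):
    """Principal branch of Lambert W (solves w * e^w = x, x >= -1/e)
    via Newton's method."""
    if x < -1.0 / math.e:
        raise ValueError("no real solution on the principal branch")
    w = 0.0 if x < 1.0 else math.log(x)
    for _ in range(100):
        ew = math.exp(w)
        w_next = w - (w * ew - x) / (ew * (w + 1.0))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w


def indicator(sigma, loss, tau, lam):
    """I_lambda(l, sigma) = (l - tau) * sigma + lam * (log sigma)^2."""
    return (loss - tau) * sigma + lam * math.log(sigma) ** 2


def sigma_star(loss, tau, lam):
    """Stationary point of the indicator in sigma, obtained from
    (l - tau) + 2 * lam * log(sigma) / sigma = 0 via Lambert W."""
    return math.exp(-lambert_w((loss - tau) / (2.0 * lam)))
```

For a hard sample ($l_i > \tau$) this gives $\sigma^* < 1$, while the quadratic log-penalty keeps the minimizer away from $\sigma \to 0$.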
## Convergence speed
Next, we compare the convergence speed of our method with the FedAvg.
For FedAvg, the update rule is:
$$
\theta_{t+1} = \theta_t - \eta \frac{1}{n} \sum_{i=1}^n \nabla_{\theta} l_i,
$$
where the update step size is:
$$
\left| \theta_{t+1} - \theta_t \right|_{\text{FedAvg}} = \eta \left| \frac{1}{n} \sum_{i=1}^n \nabla_{\theta} l_i \right|
$$
For our method with the indicator function, the update step size is:
$$
\left| \theta_{t+1} - \theta_t \right|_{I_{\lambda}} = \eta \left| \left( \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k \right) \nabla_{\theta} l_i \right|.
$$
To demonstrate that our method converges faster or has a tighter bound, we analyze the total update step sizes over all clients.
For FedAvg:
$$
\sum_{i=1}^n \left| \theta_{t+1} - \theta_t \right|_{\text{FedAvg}} = \eta \sum_{i=1}^n \left| \nabla_{\theta} l_i \right|.
$$
For our method:
$$
\sum_{i=1}^n \left| \theta_{t+1} - \theta_t \right|_{I_{\lambda}} = \eta \sum_{i=1}^n \left| \left( \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k \right) \nabla_{\theta} l_i \right|.
$$
Denote by $C_i = \sigma_i^* + (l_i - \tau) k + 2\lambda \frac{\log \sigma_i^*}{\sigma_i^*} k$ the per-sample update coefficient above. When $\sigma_i^* = \exp \left( W \left( \frac{-(l_i - \tau)}{2\lambda} \right) \right)$, $\lambda \geq \frac{-(l_i - \tau)}{2e}$, and $\tau = l_{\max} - 2 \lambda \ln \left( \frac{1}{k} \right)$, we can ensure $\left| C_i \right| \leq 1$, and thus:
$$
\left| C_i \nabla_{\theta} l_i \right| \leq \left| \nabla_{\theta} l_i \right|.
$$
Thus,
$$
\sum_{i=1}^n \left| C_i \nabla_{\theta} l_i \right| \leq \sum_{i=1}^n \left| \nabla_{\theta} l_i \right|.
$$
Therefore, the update step size for CRFed is smaller than that of FedAvg. We will provide the complete theoretical proof in the final version.
---
Next, we address each reviewer's detailed concerns point by point. Thanks!
Pdf: /pdf/2ba9444c7177d3366ebd536a5da14b3ae804acf0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Nimbus: Secure and Efficient Two-Party Inference for Transformers | Accept (poster) | Summary: This work proposes a new secure two-party computation (2PC) protocol called Nimbus for Transformer models to improve the efficiency and effectiveness of large matrix multiplication and non-linear layer approximation in Transformer inference. First, this work exploits client-side outer product and output compact to enhance layer multiplication. Second, input distributions are taken into consideration to better approximate GELU and exponential with lower-order piecewise polynomials. Comprehensive experiments and analyses are demonstrated in this work, and the results indicate Nimbus is effective and efficient.
Strengths: - Nimbus proposes client-side outer product (COP) and output compact with shift operation to reduce the computing and communication overhead.
- Nimbus considers the impact of input distribution to simplify the polynomial approximation of GELU and Softmax with lower-order piecewise polynomials and small rings.
- The authors provide a comprehensive discussion of Nimbus efficiency and feasibility, including client-side resources, asynchronous weight loading, free ring conversion, and more.
- This work presents comprehensive complexity analyses, protocol definitions, and evaluation experiments, making the results and conclusion convincing and practical.
- The evaluations prove that Nimbus significantly improves the performance and efficiency of secure 2PC for Transformer models, compared with existing works like BumbleBee, Iron, and BOLT.
Weaknesses: The major concerns are the accuracy and feasibility of estimating input distribution by sampling.
- The server samples a batch of data from the training dataset to estimate the input distribution. I notice the work lacks details of this process. For example, what is the exact batch size for sampling? I think such a hyper-parameter can influence the effectiveness and efficiency of sampling. Does Nimbus compute the results of all training data to get the distribution, or just sample a subset? The precise process of sampling is worth further mention.
- It seems that Nimbus assumes that the training data and test data share the same distribution, or at least the distributions are similar. What if the two distributions are more significantly skewed?
- The initial state of piecewise approximation polynomials is pre-defined empirically despite the precise split points optimized by equation (3). For example, the approximation for GELU is divided into three pieces, with the middle piece being a quadratic polynomial. However, what if the distributions of inputs differ in practical cases? Figure 4 illustrates the distribution of non-linear functions for one specific dataset. If there exists another dataset with a more uniform distribution, such pre-defined piecewise polynomials may result in unwanted inaccuracy.
- In some special cases, to further protect privacy, not only is the inference stage protected by secure 2PC, but the training dataset is also protected by secure training. In such cases, the server may fail to estimate the input distributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the contents of the Weaknesses part.
Besides, there are some writing mistakes. For example, in Line 4 of Algorithm 1 in the appendix, it should be $\widetilde{c} - r(\theta)$ rather than $r(\theta) - \widetilde{c}$.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As mentioned above, the input distributions estimated by sampling when approximating non-linear functions may suffer from inaccuracy, and the description of the exact sampling process is not very clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your helpful comments, and address your concerns as follows. We also appreciate the reviewer's attentive reading for pointing out the typo in the Appendix. We will rectify this in the revised version.
# Q1: Details of the batch size for summarizing the input distribution
For all eight tasks in Table 2, we randomly sample sentences from the training dataset until the total token count reaches 512. This is based on the finding that the distribution of intermediate activations stabilizes when the sampled data exceeds 256 tokens. We have included figures illustrating this observation in Figure 2 of the **supplementary PDF** under the "global" response.
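A minimal sketch of the sampling loop described above (whitespace tokenization and all names here are assumptions; the actual system would use the model's tokenizer):

```python
import random


def sample_until_token_budget(sentences, budget=512, seed=0):
    """Draw sentences at random (without replacement) until the
    cumulative token count reaches the budget, mirroring the
    calibration sampling described in the response.  Tokenization is
    naive whitespace splitting for illustration only."""
    rng = random.Random(seed)
    pool = list(sentences)
    rng.shuffle(pool)
    chosen, total = [], 0
    for s in pool:
        chosen.append(s)
        total += len(s.split())
        if total >= budget:
            break
    return chosen, total
```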
# Q2: What if distribution of the test data are significantly skewed from the training data
The distribution skewness between the training and test data indeed exists and contributes to the generalization error of the model. Nimbus is also affected by this skewness. As shown in Table 2, the skewed data contributes to the accuracy loss in the third line. However, the skewness does not have a significant impact and can be recovered through lightweight fine-tuning. The skewness is less significant for DNN models since the accuracy of DNN models relies on the independent and identically distributed (i.i.d.) assumption between the training and test datasets [1], and many techniques have been proposed to guarantee this assumption. For example, normalizing the training and test data using the same statistics, and using layer normalization to make the model more robust to distribution changes. Pretraining Transformers requires a large amount of data so that the model is trained to map data to the same hidden space. Many model compression techniques also utilize this assumption to design quantization [2] or pruning [3] strategies. Nimbus also builds on this common assumption.
[1] Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures, NIPS 2022
[2] Deja vu: Contextual sparsity for efficient llms at inference time, ICML 2023
[3] Smoothquant: Accurate and efficient post-training quantization for large language models, ICML 2023
# Q3: Generalization of the insight on activation distribution to other datasets
We supplement our study with additional experiments on the activation distribution (Figure 1 in the **supplementary PDF** of the global response) to demonstrate the generalization. Besides the observations provided in the paper, we verify the activation distribution on additional popular datasets. Our experiments include the BERT-base model on the MRPC dataset (for sequence classification), the SQuAD dataset (for question answering), and the SWAG dataset (for multiple choice), along with the GPT-2 model on the Wikitext-2 dataset (for causal language modeling). We observe obvious non-uniform distributions across these datasets. Furthermore, these distributions exhibit similar patterns, indicating that the piece-splitting strategy proposed in this paper can be directly applied to other datasets. The regular distribution of intermediate activations has also been verified by prior works as a widely applicable rule across various Transformer models and tasks [1,2], where it has been utilized for quantization and sparsity.
Therefore, compared to the previous strategy of treating the input distribution as uniform, our distribution-aware fitting is expected to yield better fitting results. We also conduct accuracy experiments on these datasets. As the following table shows, Nimbus only has a minor impact on the accuracy.
| Method | BERT-base (MRPC) | BERT-base (SQuAD) | BERT-base (SWAG) | GPT2 (wikitext-2) |
|-------------|------------------|-------------------|------------------|--------------------|
| | F1 | F1 | accuracy | perplexity |
| FP baseline | 90.12 | 88.1 | 81.08 | 20.01 |
| Nimbus | 90.42 | 87.93 | 80.94 | 21.36 |
[1] Deja vu: Contextual sparsity for efficient llms at inference time, ICML 2023
[2] Smoothquant: Accurate and efficient post-training quantization for large language models, ICML 2023
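The distribution-aware fitting idea can be illustrated with a small sketch (assumptions: a synthetic Gaussian stand-in for the real activation distribution, a tanh-based GELU, and weighted least squares via `numpy.polyfit`; the actual Nimbus fitting procedure may differ):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU (the exact form uses erf)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
# Hypothetical activation samples: pre-GELU values in Transformers are
# concentrated near zero (the non-uniform distribution discussed above).
acts = rng.normal(loc=0.0, scale=1.5, size=20_000)

grid = np.linspace(-6, 6, 2001)
# Density-proportional weights from a histogram of the sampled activations.
hist, edges = np.histogram(acts, bins=200, range=(-6, 6), density=True)
w = np.interp(grid, 0.5 * (edges[:-1] + edges[1:]), hist) + 1e-6

deg = 2
uniform_fit = np.polyfit(grid, gelu(grid), deg)              # uniform weights
aware_fit = np.polyfit(grid, gelu(grid), deg, w=np.sqrt(w))  # distribution-aware

def expected_err(coeffs):
    """Mean absolute approximation error under the activation distribution."""
    return np.mean(np.abs(np.polyval(coeffs, acts) - gelu(acts)))

print(expected_err(uniform_fit), expected_err(aware_fit))
```

Because the weighted fit spends its approximation budget where activations actually land, its expected error under the sampled distribution is lower than that of the uniform fit, which is the point of the comparison above.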
# Q4: Estimation of input distribution when the training dataset is also secret for the server
This case poses a greater challenge for secure inference and is interesting to explore. In this scenario, the intermediate activations are invisible to the server, so it cannot directly estimate the distribution. The server can employ privacy-computing techniques such as MPC and differential privacy to estimate the activation distribution, which is feasible even though the training dataset is secret. As mentioned in Question 1, our method only requires a small batch of data (e.g., 512 tokens in total) to estimate the distribution. Because the data volume is small and the estimation is a one-time setup task, the cost of using a privacy-computing technique to estimate the activation distribution is acceptable. We believe that our solution will remain attractive in this more challenging case. | Summary: This paper proposes Nimbus, a secure inference protocol for Transformers in the 2PC setting. They propose distribution-aware nonlinear function approximation to use low-degree polynomials to compute GELU and softmax. They show that their method can preserve accuracy and achieve efficient performance by comparing it with several baselines.
Strengths: - Improving the efficiency of the secure inference systems is an important and timely topic.
- Extensive experiments are conducted to demonstrate the efficiency and accuracy of the proposed system.
Weaknesses: - The nonlinear approximation is leaking private information.
- This proposed system might require huge storage overhead on the client side.
- There is incorrect information regarding BOLT (S&P'24) and BumbleBee (NDSS'24).
- There is a gap between the authors' reported results and BumbleBee's results in their paper, which needs further explanations.
Technical Quality: 2
Clarity: 2
Questions for Authors: - The major reason that prevents me from advocating this paper is that the nonlinear function approximations actually leak sensitive information about the training input distribution, which is not desirable in a secure inference system. For instance, the approximation polynomials are available to both parties, so the client can easily reverse-engineer the training inputs' distribution. For the GELU function, you can save one comparison and multiplication in 2PC compared to BumbleBee and BOLT, but at the cost of leaking private information.
- Additionally, the input ranges for different models, datasets, training parameters, and even different layers could be significantly different. Thus, you might need different approximations for different tasks, which will leak model information and is not friendly to use.
- In your system, the matrix multiplication is done on the client side, which introduces a large memory overhead on the client side as they need to load the encrypted model into the memory. The encrypted model could be hundreds of times larger than the plaintext model. Considering that the client could be a normal user, such an issue should be avoided in a 2PC inference system.
- I'm wondering why BumbleBee's performance is not as good as reported in their paper. It's about 5x faster compared to your reported numbers. Are you running their code correctly?
- In line 308, you mentioned that BOLT uses aggressive approximation for efficiency, which is not the case. MPCFormer indeed introduces significant modifications to the nonlinear functions, but I believe BOLT's approximations are accurate, and they should be comparable to your designs. Additionally, BOLT seems to be open-sourced as BumbleBee mentioned that they obtained the results by rerunning their code.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See my concerns in the Question section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback on our paper. We appreciate your insights and would like to provide more clarification.
# Q1, Q2: The leakage of the approximation polynomial
The secure polynomial evaluation does not allow the client to learn the polynomial coefficients and comparison thresholds, except that the coefficient of the highest-degree term is known to the client to achieve slightly better efficiency. The leakage of this high-degree coefficient can be eliminated at an overhead of about 8%. See below for more details.
We use the GELU function as an example to show how securely computing a polynomial keeps the coefficients and thresholds secret from the client. First, consider the coefficients. Take the approximation polynomial $P^2(x)=b_0 x^2 + b_1 x + b_2$. The client and server hold an additive secret sharing of the input secret $x$, denoted by $\langle x \rangle = (x_c, x_s)$, and only the server knows $(b_0, b_1, b_2)$. To compute an additive secret sharing of $P^2(x)$, the two parties first jointly compute $\langle A_1 \rangle = \Pi_{mul}(\langle b_0 \rangle \cdot \langle x \rangle)+b_1$, where $\Pi_{mul}$ is the secure multiplication technique in BumbleBee [29]. Then the two parties jointly compute $\langle A_2 \rangle = \Pi_{mul}(\langle A_1 \rangle \cdot \langle x \rangle)+b_2$ as the sharing of $P^2(x)$. During the evaluation, $b_0$ is kept secret through secret sharing, while $b_1$ and $b_2$ can be added locally on the server side using the linearity of additive secret sharing. This ensures that $b_0, b_1, b_2$ remain secret from the client. Second, consider the comparison thresholds. To compare $x$ with a secret threshold $T$ known only to the server, the server can locally compute the additive secret sharing $\langle x-T \rangle = \langle x \rangle - T$ using the same linearity. Then, both parties execute a secure comparison protocol on the input $\langle x-T \rangle$ to obtain an additive secret sharing of the bit indicating whether $x-T$ is greater than zero. Thus, the threshold $T$ is kept secret.
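As a plain-Python simulation of this flow (a sketch only: trusted-dealer Beaver triples stand in for the OT/HE-based $\Pi_{mul}$ of BumbleBee, and the ring width is illustrative):

```python
import random

Q = 2**32  # ring modulus; the actual bit width used in Nimbus may differ

def share(x):
    """Additive secret sharing over Z_Q: x = x_c + x_s (mod Q)."""
    x_c = random.randrange(Q)
    return (x_c, (x - x_c) % Q)

def reconstruct(sh):
    return (sh[0] + sh[1]) % Q

def beaver_triple():
    """Trusted-dealer stand-in for the triple generation inside Pi_mul."""
    a, b = random.randrange(Q), random.randrange(Q)
    return share(a), share(b), share(a * b % Q)

def secure_mul(x_sh, y_sh):
    """Beaver multiplication: only the masked values e = x-a and f = y-b
    are opened (here reconstructed directly as a simulation shortcut),
    so neither party learns x or y."""
    a_sh, b_sh, c_sh = beaver_triple()
    e = (reconstruct(x_sh) - reconstruct(a_sh)) % Q
    f = (reconstruct(y_sh) - reconstruct(b_sh)) % Q
    z_c = (e * b_sh[0] + f * a_sh[0] + c_sh[0]) % Q          # client share
    z_s = (e * f + e * b_sh[1] + f * a_sh[1] + c_sh[1]) % Q  # server share
    return (z_c, z_s)

def secure_poly2(x_sh, b0, b1, b2):
    """<P(x)> for P(x) = b0*x^2 + b1*x + b2, with (b0, b1, b2) known only
    to the server. b0 is secret-shared before multiplying; b1 and b2 are
    added locally to the server's share (linearity of additive sharing)."""
    a1 = secure_mul(share(b0), x_sh)   # <b0 * x>
    a1 = (a1[0], (a1[1] + b1) % Q)     # server adds b1 locally
    a2 = secure_mul(a1, x_sh)          # <(b0*x + b1) * x>
    return (a2[0], (a2[1] + b2) % Q)   # server adds b2 locally
```

For example, `reconstruct(secure_poly2(share(5), 3, 2, 7))` gives `3*25 + 2*5 + 7 = 92`, while the client's view contains only uniformly random shares of the coefficients.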
In this paper, to achieve better performance, we choose to let the server send $b_0$ to the client. As illustrated in line 1 of Alg. 3, this can eliminate one call to $\Pi_{mul}(\langle b_0 \rangle \cdot \langle x \rangle)$ and instead compute $b_0 \cdot \langle x \rangle$ locally. This is a trade-off between security and efficiency. However, it is important to note that simply revealing the highest degree coefficient is not sufficient for the client to recover the input distribution. If the leakage is still a privacy concern, we can keep the highest-degree coefficient $b_0$ secret as the protocol in the above paragraph mentioned. We denote Nimbus* as the slightly modified protocol in the above paragraph that keeps $b_0$ secret and guarantees zero leakage on the polynomial. Following the same experimental setup, Nimbus* only increases the overhead by around 8% for secure computation of non-linear functions compared to Nimbus that reveals $b_0$. See the following table for the efficiency comparison, where the values are measured in seconds.
|Method| GELU (LAN) | Softmax (LAN) | GELU (WAN)| Softmax (WAN)|
|-|-|-|-|-|
|Bumblebee|1.17|2.23|3.36|9.01|
|Nimbus|0.28|0.56|1.02|3.07|
|Nimbus*|0.30|0.61|1.10|3.29|
# Q3: The memory concern of loading encrypted model into the memory
We discussed this concern in Section 3.3. The encrypted model weights are expanded to at least four times the size of the plaintext weights. However, our solution does not store all the ciphertexts of the model weights in memory. Instead, we only load the ciphertexts for a limited number of layers into memory. This approach is based on the insight that secure inference is primarily bottlenecked by network communication. By overlapping the swapping of ciphertexts from local disk to memory with network communication, we add no extra running time for loading a portion of the model-weight ciphertexts. See Section 3.3 for more details.
# Q4: Performance gap between the reported results and BumbleBee’s results in their paper
Our experiments are based on the open-source codes of BumbleBee and are expected to produce similar results to theirs. We have double-checked the experimental results reported in this paper against those reported in BumbleBee and find similar performance results. Note that while BumbleBee reports the performance of the whole model, we report the performance of each layer.
For example, consider the experiment over a WAN, using almost identical settings to BumbleBee: a 400 Mbps network bandwidth, the BERT-base model, and an input length of 128. In Table V of BumbleBee, they state that the 12-layer model takes 4.86 minutes, which translates to approximately 24.3 seconds per layer. This is very close to the 23 seconds per layer reported in our Table 6(b). Despite differences in hardware, the performance measured from our concrete implementation is therefore consistent with theirs. We are happy to provide source code for experimental reproduction.
# Q5: Comparison with BOLT's nonlinear approximation
By saying that BOLT uses aggressive approximation, we mean that the numbers reported in BOLT show an accuracy loss, as shown in Table 2 of BOLT's paper. We use the term "aggressive" to differentiate it from the approximations with (almost) no accuracy loss used in Iron and BumbleBee. Indeed, BOLT's approximation has a much smaller impact on accuracy than MPCFormer's. In the updated version, we will clarify this and instead state that BOLT has a larger accuracy loss than Iron, BumbleBee, and Nimbus.
By the time we submitted the paper, BOLT had not released their codes. Therefore, we implemented their nonlinear approximation using the backend of SecretFlow and compared the performance in Appendix G.4. Here, we also include performance comparison using their official codes, which can be found in Table 1 of the **supplementary PDF** under the "global" response.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. I have comments as follows:
Thanks for the clarifications on the function privacy. I think it should be clearly explained in the paper.
I still do not understand where the savings come from. In your GELU approximation, you need to evaluate 2 multiplications between secret shares and 2 comparisons, which should be the same as BOLT. Are the savings from the efficient crypto primitives used in BumbleBee?
Given that my major concern is addressed, I've slightly raised my score. I suggest the authors add a paragraph to discuss the function privacy in the revision.
---
Rebuttal 2:
Comment: We are pleased to hear that the rebuttal addresses your concerns, and we thank you for raising your score. We will clarify the security of the polynomial approximation protocol in the revision. We will also describe the zero-leakage version of the protocol and provide the performance comparison.
For the question of where the savings come from, there are three theoretical advantages to consider. First, the number of truncations called in Nimbus equals the number of secure multiplications called, which is two. In contrast, BOLT calls secure multiplication twice but makes four truncation calls, because BOLT includes additional multiplications between public values and secrets. While a multiplication between a public value and a secret can be done locally, the truncation of the resulting sharing cannot be saved. The truncation protocol used in BOLT requires $\log \ell+3$ rounds of communication [1], where $\ell$ represents the bit length of the ring; as a result, this increases the overall communication overhead. Second, our low-degree approximation reduces the fixed-point error during computation, allowing computation on a smaller ring. Since other layers still require computation on a larger ring, we also propose a truncation-upcast fusion protocol to eliminate the overhead of upcasting ring elements from the smaller ring to the larger ring. In contrast, BOLT requires $\log m+2$ rounds of communication [1] for this upcast, where $m$ is the bit length of the smaller ring. Third, all computations in Nimbus are performed on the ring, while BOLT evaluates the linear layers on a field and the nonlinear layers on a ring; as a result, an additional conversion between the field and the ring is needed. As for the implementation, Nimbus uses HE-based secure multiplication [2], while BOLT uses OT-based secure multiplication [1]. The former saves more communication and performs better when the network condition is poor. The underlying OT protocol also makes a difference: the state-of-the-art Ferret OT [3] used in Nimbus takes less communication than the IKNP OT [4] used by BOLT.
[1] Sirnn: A math library for secure rnn inference, SP 2021
[2] BumbleBee: Secure Two-party Inference Framework for Large Transformers, NDSS 2024
[3] Ferret: Fast extension for correlated ot with small communication, CCS 2020
[4] Extending Oblivious Transfers Efficiently, CRYPTO 2003 | Summary: This paper provides a hybrid method that uses both HE and additive secret sharing (Add-SS) to perform 2PC privacy-preserving transformers. Two main contributions are discussed in this paper: (1) Client-side Outer Product Protocol and (2) Lower Degree Polynomial Approximation and Smaller Rings.
Strengths: The paper presents the complexity analysis and memory impact analysis well. These analyses help readers understand the advantages of the COP protocol.
Weaknesses: 1. Not quite sure why HE + Add-SS is used for 2PC; why not directly use multi-party computing for 2PC? Please compare the differences between these two techniques and related papers, as it is uncertain whether HE + Add-SS is a more promising technique for privacy-preserving transformers.
2. The Client-side Outer Product Protocol is one of the main contributions. However, there is no security proof for section 3.2 Client-side Outer Product Protocol, especially for the content from lines 165 to 170.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why HE + Add-SS is used for 2PC; why not directly use multi-party computing for 2PC?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No security proof for section 3.2 Client-side Outer Product Protocol
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your helpful comments, and would like to address the main concerns as follows.
# Q1: Why HE + Add-SS is used for secure two-party inference
HE combined with additive secret sharing (Add-SS) is one of the most promising techniques for secure two-party DNN inference. The combination of HE and Add-SS has been widely used in prior works such as Gazelle [17], Delphi [38], Cheetah [16], Iron [13], BOLT [33], and BumbleBee [29]. The rationale is that HE allows the communication cost of securely computing linear layers to be independent of the model parameters. While other 2PC approaches such as GMW, Beaver multiplication, and garbled circuits (GC) require communication linear in the size of the model parameters, the HE approach achieves significantly lower communication for linear layers. For non-linear layers, 2PC protocols mainly adopt the Add-SS approach, which currently achieves the best efficiency. In particular, Add-SS, often combined with the oblivious transfer (OT) extension protocol, is particularly suitable for securely computing Boolean circuits, which are currently the most efficient circuit representation for non-linear functions. The Add-SS approach also requires much lower communication than alternatives such as GC. We will clarify this in the updated version.
# Q2: Security proof of the client-side outer product protocol
We omit the security proof since the security of our client-side outer product protocol directly builds upon a secure homomorphic encryption (HE) scheme, and thus its security proof is somewhat straightforward. In particular, our protocol guarantees the same security as the traditional server-side inner product protocol. In the presence of a semi-honest adversary, we provide a brief proof idea below, and will include a formal proof in the Appendix.
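The share flow underlying this argument can be checked with a small plain-arithmetic simulation (a sketch only: ordinary modular arithmetic stands in for the HE evaluation and decryption, and a small modulus is chosen so 64-bit integer matrix multiplication does not overflow; real protocols use larger rings):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 2**16  # illustrative modulus; keeps int64 matmul overflow-free

d, k, n = 4, 5, 3                       # X is n x d, W is d x k
W = rng.integers(0, Q, size=(d, k))     # server's model weights
X = rng.integers(0, Q, size=(n, d))     # inference input

# Input already additively shared: X = X_c + X_s (mod Q).
X_c = rng.integers(0, Q, size=(n, d))
X_s = (X - X_c) % Q

# Client side: homomorphically evaluate X_c * Enc(W) - Y_c. Plain modular
# arithmetic stands in for the HE evaluation; the random mask Y_c makes
# the decrypted value look uniformly random to the server.
Y_c = rng.integers(0, Q, size=(n, k))   # client's random output share
T_s = (X_c @ W - Y_c) % Q               # server obtains this by decryption

# Server side: local completion of the product.
Y_s = (T_s + X_s @ W) % Q

# (Y_c, Y_s) reconstructs to X * W mod Q.
assert np.array_equal((Y_c + Y_s) % Q, (X @ W) % Q)
```

The final assertion mirrors the identity $Y_s = T_s + X_s W = XW - Y_c$ used in the proof sketch.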
Specifically, the model weights are encrypted by the server and then sent to the client, where the ciphertexts are denoted by Enc(W). By the CPA security of the HE scheme, the ciphertexts reveal no information about these model weights. For secure matrix multiplication, the input matrix X has been shared as $(X_c, X_s)$ using Add-SS. The client samples a matrix of random shares $Y_c$, and then homomorphically computes a ciphertext $Enc(X_c * W-Y_c)=X_c * Enc(W)-Y_c$. Due to the circuit-privacy property of the HE scheme, in the server's view, the ciphertext $Enc(X_c * W-Y_c)$ does not reveal information on $X_c$ and $Y_c$ beyond its plaintext. The ciphertext $Enc(X_c * W-Y_c)$ is sent to the server, who decrypts it to a matrix of shares $T_s$ such that $Y_c$ and $T_s$ constitute additive sharings of $X_c * W$. Finally, the server can locally compute its shares $Y_s=T_s+X_s * W=X * W-Y_c$, and $(Y_c, Y_s)$ constitutes the additive sharings of $X * W$. It is natural to see that the local computation is secure. In the proof of security, the simulator can simulate the HE ciphertexts using "dummy" ciphertexts on zero, and the adversary's view between the real-world execution and the ideal-world execution is proven to be computationally indistinguishable by reduction to the CPA and circuit-privacy security of the HE scheme. | Summary: This submission proposed secure inference protocols for Transformer-based models, involving two crucial components: HE-based linear operations and approximation-based non-linear operations. Experiments were conducted to verify the feasibility of the proposed protocols and to compare the performance with prior works on Transformer secure inference.
Strengths: The author focused on the critical part of designing efficient MPC protocols for Transformers, with a detailed analysis of the drawbacks of prior works.
For the linear layer, the solution takes into consideration the different computation resources that the client and server hold. As such, during the linear operations, the server side is responsible for the heavy cryptographic operations.
For the non-linear layer, the protocol in the submission is based on the distribution of the function input. Concretely, a low-degree approximation was used for the selected range.
The experiments in the paper are comprehensive and complete. Results indicate that it outperforms the baselines.
Weaknesses: 1. One of the insights of the non-linear construction is based on the non-uniform distribution of the input. It is not quite convincing, as insufficient explanation was provided, thus limiting the feasibility of the protocol. Is it applicable to all types of input datasets or just limited to certain categories?
2. The contributions and novelty of the non-linear design are not sufficiently highlighted. It can be viewed as a variant of spline approximation with carefully selected coefficients, which is commonly applied in prior works [1].
3. Truncation in Alg. 5 discards the high-order bits rather than the low-order bits. However, commonly it is a variant of the logical right shift [2]. Please clarify the definition of such an operation.
[1] Hou, Xiaoyang et al. "CipherGPT: Secure Two-Party GPT Inference." IACR Cryptol. ePrint Arch. 2023 (2023): 1147.
[2] D. Rathee et al., "SiRnn: A Math Library for Secure RNN Inference," 2021 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 2021, pp. 1003-1020.
Technical Quality: 2
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your insightful feedback and suggestion. We address the main concerns as follows.
# Q1: Generalization of insight on activation distribution to other datasets
We supplement our study with additional experiments on the activation distribution (Figure 1 in the **supplementary PDF** of the global response) to demonstrate the generalization. Besides the eight tasks in the GLUE benchmark provided in the paper, we verify the activation distribution on additional popular datasets. Our experiments include BERT-base on the MRPC dataset (for sequence classification), the SQuAD dataset (for question answering), and the SWAG dataset (for multiple choice), along with GPT-2 on the Wikitext-2 dataset (for causal language modeling). We observe obvious non-uniform distributions across these datasets. These distributions also exhibit similar patterns, indicating that the piece-splitting strategy proposed in this paper can be directly applied to other datasets. The non-uniform distribution of intermediate activations has also been verified by other studies as a widely applicable rule across various Transformer models and tasks [1,2], which they have utilized for quantization and sparsity.
Therefore, compared to the previous strategy that treats the input distribution as uniform, our approach of fitting nonlinear functions according to the activation distribution is expected to yield better fitting results. We also conduct accuracy experiments on these datasets. As the following table shows, Nimbus only has a minor impact on the accuracy.
| Method | BERT-base (MRPC) | BERT-base (SQuAD) | BERT-base (SWAG) | GPT2 (wikitext-2) |
|-------------|------------------|-------------------|------------------|--------------------|
| | F1 | F1 | accuracy | perplexity |
| FP baseline | 90.12 | 88.1 | 81.08 | 20.01 |
| Nimbus | 90.42 | 87.93 | 80.94 | 21.36 |
[1] Deja vu: Contextual sparsity for efficient llms at inference time, ICML 2023
[2] Smoothquant: Accurate and efficient post-training quantization for large language models, ICML 2023
# Q2: Clarification of the contributions on nonlinear layers
Our work on nonlinear layers has two main contributions. First, we observe that the activation distribution of the transformer model is non-uniform. Based on this observation, we allocate the approximation budget based on the input distribution when fitting the nonlinear function. This is different from prior works that directly minimize the output difference between the original function and approximated polynomials, assuming a uniform input distribution. Second, we have found that our low-degree approximation allows for efficient small ring computation. To support this, we also propose a truncation-upcast fusion protocol to avoid the cost of ring upcast. We will explain our twofold contributions in more detail in a later revision.
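As a rough illustration of the fixed-point truncation that such small-ring computation relies on (a sketch only, not the paper's fused truncation-upcast protocol): each party locally right-shifts its additive share, which discards the low-order bits up to a small additive error plus an occasional wrap-around term.

```python
import random

K, S = 32, 12          # ring bit width and fractional bits (illustrative)
Q = 2**K

def share(x):
    x_c = random.randrange(Q)
    return (x_c, (x - x_c) % Q)

def reconstruct(sh):
    return (sh[0] + sh[1]) % Q

def trunc_local(sh, s=S):
    """Share-local truncation: each party shifts its own share.
    Reconstruction yields (x >> s) + k*(Q >> s) - c for some k, c in
    {0, 1}, i.e. correct up to 1 ulp plus a wrap term that appears when
    the two shares overflow the ring modulus."""
    return (sh[0] >> s, sh[1] >> s)

# Empirically check the stated error model on random fixed-point values.
random.seed(0)
for _ in range(1000):
    x = random.randrange(2**24)          # positive fixed-point value
    t = reconstruct(trunc_local(share(x)))
    err = t - (x >> S)
    assert err in (0, -1, Q >> S, (Q >> S) - 1)
```

Dedicated truncation protocols (as in BOLT's SiRNN-based truncation, or the fused protocol above) exist precisely to remove the wrap term and control this error.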
# Q3: Clarification of truncation
Our truncation in Alg. 5 does discard the low-order bits. We give a detailed derivation of Alg. 5 in Equation (8) of the Appendix; see the last line of Equation (8), where the low-order bits of $\langle x \rangle$ are discarded by division by $2^s$. We will further clarify this in the updated version of Alg. 5. We are open to further discussion if there are any remaining questions. | Rebuttal 1:
Rebuttal: # Global response
Thank you for taking the time to review our work. Besides the separate response, we also include a **PDF** file under the global response that contains figures and a table related to the reviewers' comments. If you have any further questions or need more details, please don't hesitate to reach out. Your feedback is important to us, and we are ready to explain anything further. Thanks again for your comments!
Pdf: /pdf/37b18308f235b439159625818854d7397a0b6f64.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
L-TTA: Lightweight Test-Time Adaptation Using a Versatile Stem Layer | Accept (poster) | Summary: This paper introduces L-TTA, a novel lightweight approach to test-time adaptation (TTA) that focuses on the stem layer of deep neural networks. The method incorporates a Domain Embedding Layer (DEL) using discrete wavelet transforms and a Gaussian Channel Attention Layer (GCAL) to minimize stem-layer uncertainty rather than full-network entropy. By updating only the stem layer during adaptation, L-TTA significantly reduces memory and computational requirements compared to existing TTA methods.
Strengths: 1. The paper proposes a fundamentally new way to perform TTA, focusing on the stem layer and uncertainty minimization rather than entropy minimization across the full model. This enables the real-world application of lightweight TTA.
1. L-TTA achieves competitive performance on standard TTA benchmarks while using orders of magnitude less memory than prior methods.
Weaknesses: 1. **Limitations in continual adaptation from frequent resets**: The method relies on frequent resets (every 10 iterations) to maintain performance, especially for complex datasets like ImageNet-C. This frequent resetting could leverage the benefits of intensive warm-up rather than demonstrating true adaptation capabilities from GCAL and DEL. This raises questions about the method's stability and effectiveness for continual adaptation. A deeper analysis of different reset intervals and alternative stabilization techniques is needed to fully understand and potentially improve the method's capabilities for continual adaptation.
1. **Lack of statistical significance**. The paper does not report any error bars for the experimental results. It is straightforward to understand that the experiments are only run on a single seed (42).
1. **Incomplete and inconsistent experimental details**:
- CIFAR100-C results are excluded from the main experiment table.
- The comparison with EcoTTA, a relevant baseline for lightweight TTA, is limited to only the Cityscapes-C dataset.
- Only four weather corruptions were used in Cityscapes-C without any justifications.
- The small batch size experiments in Figure 5 lack comparison with SAR, a method specifically designed to address small batch size issues in TTA.
*Important*: Although the methodology is solid and brings a significant drive to lightweight TTA, I rate it a weak reject due to the abovementioned weaknesses. I would increase the score if the rebuttal addresses the issues.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Please discuss more memory-efficient TTA methods such as MECTA [a].
1. Could the authors elaborate more on the sentence below from Section 3.3? Specifically:
> In recent TTA scenarios, the entropy minimization model is traditionally learned by filtering out data that is evaluated to have low entropy. However, contrary to this trend, [61] experimentally shows that HFC derived from input images helps generalize the model. This means that high entropy actually contributes significantly to improving prediction accuracy.
- How are high-frequency components (HFC) related to entropy in TTA?
- What evidence supports the claim that "high entropy contributes significantly to improving prediction accuracy"?
- How does this observation inform the design of L-TTA?
1. Given that L-TTA focuses on modifying the stem layer, how might this approach apply to test-time domain generalization scenarios that often require consideration of high-level information?
[a] Hong, Junyuan, et al. "Mecta: Memory-economic continual test-time model adaptation." 2023 International Conference on Learning Representations. 2023.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: 1. The proposed L-TTA appears to be primarily designed for vision applications. We would appreciate a discussion of broader applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for the depth of insight you have provided. We have carefully considered your review and have provided our detailed responses to each of your comments below. :)
**Weakness 1.**
>- Our proposal aims to adapt in an extremely constrained environment, *i.e.,* backpropagating only through the stem layer with minimal memory. In Appendix F: Limitations and Discussions, we acknowledge the limitations of continuously processing datasets with many classes, such as ImageNet-C, but we show that for datasets with 100 classes or fewer, **such as Cityscapes-C, CIFAR10-C, and CIFAR100-C, the effectiveness of TTA can be maintained reliably without resetting. Moreover, this performs better than resetting every 10 iterations.**
**Weakness 2.**
>- We perform the experiments shown in Table C for the 15 main corruptions, applying 10 additional seeds (3, 10, 21, 22, 43, 99, 318, 500, 565, 777) apart from the single seed (42) used in the paper.
>- We report a mean of 64.85% (64.9% in Table 1) with a standard deviation (std) of 0.02.
>- **Table C**
>| Corruption | S_3 | S_10 | S_21 | S_22 | S_43 | S_99 | S_318 | S_500 | S_565 | S_777 | std. | Avg. (Err.) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Gaussian_noise|78.28|78.33|78.32|78.34|78.23|78.27|78.37|78.29|78.30|78.30|0.04|78.30|
|Shot_noise|77.82|77.86|77.81|77.78|77.75|77.85|77.82|77.76|77.82|77.84|0.04|77.81|
|Impulse_noise|81.31|81.32|81.32|81.39|81.35|81.38|81.37|81.41|81.29|81.28|0.04|81.34|
|defocus_blur|75.26|75.31|75.19|75.28|75.21|75.20|75.19|75.23|75.17|75.14|0.05|75.22|
|glass_blur|81.87|81.77|81.82|81.88|81.80|81.79|81.83|81.81|81.77|81.76|0.04|81.81|
|motion_blur|73.08|73.01|72.98|73.03|72.94|73.04|72.90|73.02|73.00|72.97|0.05|73.00|
|zoom_blur|64.25|64.17|64.29|64.32|64.26|64.29|64.20|64.31|64.22|64.26|0.05|64.26|
|snow|68.98|68.96|68.94|68.88|68.91|68.90|68.86|68.97|68.97|68.99|0.04|68.94|
|frost|58.68|58.64|58.70|58.64|58.66|58.51|58.70|58.67|58.69|58.66|0.06|58.65|
|fog|54.57|54.57|54.51|54.56|54.54|54.43|54.50|54.50|54.56|54.51|0.04|54.52|
|brightness|33.36|33.34|33.35|33.31|33.36|33.34|33.39|33.34|33.38|33.36|0.02|33.35|
|contrast|73.36|73.31|73.29|73.49|73.28|73.35|73.38|73.34|73.40|73.28|0.07|73.35|
|elastic_transform|58.20|58.39|58.28|58.24|58.30|58.25|58.18|58.32|58.18|58.26|0.07|58.26|
|pixelate|40.16|40.15|40.11|40.20|40.21|40.09|40.12|40.13|40.14|40.12|0.04|40.14|
|jpeg_compression|53.79|53.86|53.84|53.84|53.94|53.86|53.75|53.84|53.85|53.72|0.06|53.83|
|Avg. (total corr.)|64.86|64.87|64.85|64.88|64.85|64.84|64.84|64.86|64.85|64.83|0.02|64.85|
**Weakness 3.**
>- **(1 & 2):** We show the experimental results, including CIFAR10-C and CIFAR100-C on ResNet50 with EcoTTA, in Table 6 (see Appendix E). In this experimental environment, we achieve state-of-the-art performance in terms of both accuracy and memory usage.
>- **3:** This TTA study covers the trade-off between model prediction accuracy and memory usage. For fair comparison with the state-of-the-art research (i.e., EcoTTA) on the autonomous driving setting, we only use four weather corruptions.
>- **4:** In Table E below, we evaluate the four methodologies, including SAR, on the same ResNet50 with roughly the same parameters. SAR is evaluated using the authors' official GitHub code with the provided dataset, in 'resnet50_bn_pytorch' mode, changing only the batch size.
>- **Table E**
> | Method | batch size (1) | batch size (2) | batch size (4) | batch size (8) |
|:---:|:---:|:---:|:---:|:---:|
|TENT|99.89|93.25|79.82|68.43|
|EATA|99.86|99.52|95.25|77.19|
|SAR|99.86|93.28|81.20|69.00|
|Ours|75.16|72.14|68.75|66.79|
**Question 1.**
>- **1:** MECTA has a training flow similar to EcoTTA: it forwards through all layers of the model and proposes a special normalization layer to minimize memory usage (measured by cache). However, it relies on methodologies such as TENT and EATA to perform TTA. We address the fundamental problem as an independent TTA methodology by forwarding only through the proposed, reconstructed stem layer, and show favorable memory usage, as shown in Table B.
>- **Table B**
> | Method | CIFAR10 | CIFAR100 | ImageNet-C |
|:---:|:---:|:---:|:---:|
|MECTA|130|130|397|
|EcoTTA|296|296|438|
|Ours|**6.4**|**6.4**|**26**|
**Question 2.**
>- **2-1:** In TTA, entropy measures the confidence of the classification result for an input image: high entropy corresponds to low confidence (a flat predictive distribution), and low entropy to high confidence (a peaked one). In [A], HFCs and LFCs are extracted from an image and inferred separately; the HFC predictions are correct yet exhibit high entropy, while the LFC predictions are incorrect yet exhibit low entropy.
>- **2-2:** In [A], it is experimentally shown that noise-like data (HFC), degraded via the Fourier transform to the point that the object cannot be identified, yields better accuracy than the object data (LFC) extracted from the same image.
>- **2-3:** In this work, we discuss the differences between conventional TTA's assumptions and those of [A]: we decompose the input into LFC and HFC with the DWT and feed the channel attention of the resulting intermediate features into GCAL. As shown in Fig. 3(b), after performing TTA with L-TTA, the channel attention of the HFC varies more than that of the LFC, which is consistent with the latter argument underlying our design.
>- **Reference**
>- [A] High-frequency component helps explain the generalization of convolutional neural networks, *2020 CVPR.*
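To make the entropy-confidence relationship in **2-1** concrete, here is a minimal, self-contained sketch (our illustration, not code from [A]): a flat predictive distribution (low confidence) has high Shannon entropy, while a peaked one (high confidence) has low entropy.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Flat prediction over 4 classes: low confidence -> high entropy (= log 4 ~ 1.386)
flat = [0.25, 0.25, 0.25, 0.25]
# Peaked prediction: high confidence -> low entropy
peaked = [0.97, 0.01, 0.01, 0.01]

print(entropy(flat))    # ~1.386
print(entropy(peaked))  # much smaller
```

Entropy-minimization TTA methods such as TENT drive predictions toward the peaked case; our uncertainty measure is defined differently (see GCAL), but the confidence intuition is the same.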
**Question 3.**
>- Domain Generalization uses multiple source domains for training and employs networks to generate adversarial domains. We believe L-TTA's uncertainty-minimization and frequency-decomposition approaches can enhance the high-level information of such generative models. The "Ours w/o GCAL" results in Table 1 indicate that model generalization is feasible even without TTA.
---
Rebuttal 2:
Comment: I appreciate the thorough rebuttal from the authors, especially experimenting on various seeds. A few concerns remain:
- Accuracy and memory comparisons with EcoTTA on ImageNet-C should also be included and discussed.
- Why only consider autonomous driving scenarios? Although it is an important application, I would appreciate the comparisons and discussions on more data types.
- Table E contains different error values compared to Figure 5 (in the manuscript). What made the difference?
---
Rebuttal 3:
Comment: We are thankful for your interest in the proposed L-TTA. In response to the concerns provided once again, we offer the following responses.
**Response 1**
>+ As shown in Table F below, it has just **0.25%** lower accuracy on ImageNet-C than EcoTTA, while using **16.84$\times$** less memory.
>+ Since it consumes much less memory than EcoTTA or the MECTA provided in the rebuttal, we expect the proposed L-TTA to be more applicable in battery-based edge devices.
>+ **Table F**
>| Method | Accuracy (%) | Mem (MB) |
|:---:|:---:|:---:|
| EcoTTA | **64.6** | 438 |
| Ours | 64.85 | **26** |
**Response 2**
>+ We’ve extended the scenario of L-TTA to the medical domain to satisfy concerns for applications other than autonomous driving.
>+ The extended scenario is aimed at adaptation to use the pre-trained model in other hospitals or institutions.
>+ The Camelyon17[A] dataset used in the experiment has the task of distinguishing whether a histopathological image is a tumor or not. The training set comes from three hospitals (=source domain), and the test set comes from another hospital (=target domain).
>+ As shown in Table G, we can see a progressive improvement in performance on the target domain as we perform L-TTA, and eventually achieve a **10.3%** improvement over the baseline.
>+ **Table G**
| Method | Target domain Acc. (%) |
|--|:---:|
| Baseline | 80.46 |
| Ours w/o TTA | 77.31 |
| Ours (10 iterations) | 89.72 |
| Ours (1 epoch) | **90.76** |
**Reference**
> [A] From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge, IEEE Transactions on Medical imaging, 2019.
**Response 3**
>+ **To clarify the results first, Table E (in our rebuttal) and Figure 5 (in the manuscript) provided are from the same data.**
>+ Unfortunately, the y-axis shown in Figure 5 has a typo: it is now written as [100 - 95 - 90 - 85 - 95 - 90 - 85], which is correct to be written as [100 - 95 - 90 - 85 - 80 - 75 - 70].
>+ We apologize for any confusion caused by our mistake.
---
Rebuttal 4:
Comment: I appreciate the responses, and here are my comments:
- Response 1: Why is EcoTTA accuracy lower than Ours? I understand it as a typo of writing in the opposite order.
- Response 2: I appreciate running new experiments but still have concerns about CityScapes-C.
- Response 3: Thank you for the clarification.
---
Rebuttal 5:
Comment: Thank you again for your continued comments. Our response is as follows:
**Re) Response 1**
>- We apologize for the confusion. The accuracy (%) reported in Table F refers to error (%), so it is correct that EcoTTA performs 0.25% better.
**Re) Response 2**
>- We have answered why we experimented with four corruptions and the concerns about other applications with our experiments.
>- Can you be more specific about the implications of the concerns that remain unresolved? Time is short, but we will actively work to address them.
---
Rebuttal Comment 5.1:
Comment: My original question was:
- Only four weather corruptions were used in Cityscapes-C without any justifications.
Therefore, I wonder if L-TTA would perform well in other corruption types or be somewhat specific towards weather corruptions.
---
Rebuttal 6:
Comment: Thank you for the clarification on your concerns.
Consistent with the experimental results for image classification, the proposed L-TTA can be effective for all corruptions in the semantic segmentation task by minimizing the uncertainty in the target domain.
We show the following results for the remaining corruptions (e.g., Noise, Blur, and Digital) among the 15-corruptions shown in the manuscript.
This results in an average mIoU (%) improvement of 4.81%, and we report the effectiveness of the adaptation for each corruption separately in the table.
> | Method | Gaussian.| Shot. | Impul. | Defoc. | Glass. | Motion. | Zoom. | Cont. | Elastic | Pixelate | JPEG | Avg. |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Ours (w/o TTA) | 36.98 | 53.60 | 20.40 | 74.01 | 66.43 | 71.71 | 18.74 | 74.48 | 52.91 | 74.00 | 62.94 | 55.11 |
| Ours (w/ TTA) | **46.65 (+9.67)** | **59.65 (+6.05)** | **43.61 (+23.21)** | **74.45 (+0.44)** | **69.26 (+2.83)** | **72.27 (+0.56)** | **24.74 (+6.00)** | **74.61 (+0.13)** | **53.17 (+0.26)** | **74.39 (+0.39)** | **66.27 (+3.33)** | **59.92 (+4.81)** |
* *Note that the experimental conditions are the same as in the manuscript.*
I really appreciate the ongoing discussion.
---
Rebuttal Comment 6.1:
Comment: I appreciate running another additional experiment. The result solved my concerns (I look forward to comparisons with other TTA baselines in the future draft).
I would raise the score to 5, acknowledging the novel approach for lightweight TTA.
---
Reply to Comment 6.1.1:
Comment: We are glad that your concerns have been addressed and appreciate your valuable comments to help us improve the quality of the paper.
Best regards, | Summary: This paper proposes a novel test-time adaptation (TTA) method, L-TTA, that minimizes uncertainty instead of entropy. The method involves remodeling the stem layer of the network to minimize uncertainty, which significantly reduces memory overhead and enables rapid adaptation to the target domain. The stem layer applies a discrete wavelet transform to the input features to extract multi-frequency domains and minimize their individual uncertainties.
The paper presents a thorough evaluation across various tasks, demonstrating competitive results and significant reductions in memory overhead.
Strengths: 1. The idea is novel and interesting, with only the stem layer participating in the TTA process.
1. The paper provides a comprehensive evaluation of different tasks.
1. Existing experiments show competitive results, with significant reductions in memory overhead.
Weaknesses: 1. The method section of the paper is confusing, particularly the core components GCAL and DEL.
1. Why set the GT of $\gamma_\mu$ in GCAL to the maximum value of the sigmoid function (= 1)? This seems to encourage the SE block to output the scale parameter as 1, thus producing a trivial solution that keeps the input unchanged.
1. Moreover, the description of DEL is unclear. The authors state that DEL encapsulates GCAL and CONV layers with DWT and IDWT layers, but specific operators are not provided.
1. The organization of the experimental section is also somewhat confusing.
1. Table 1 lacks the results for CIFAR100 and the memory usage which is the core contribution of the paper.
1. Table 6 presents memory usage results on CIFAR10/100, but the network structure is altered to ResNet-50.
1. Figure 4 also presents memory usage results on CIFAR100/ImageNet, but only ResNet-50 results are shown in a bar chart.
1. There remains a gap between L-TTA and the baseline on ImageNet-C in Table 1.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. The authors seem to provide memory usage results only for the ResNet-50 backbone. This raises a concern about whether the memory usage performance of L-TTA is consistent across different backbones.
1. The costs of network training and inference are divided into spatial and temporal dimensions. Usually, there is a trade-off between memory usage and inference time. The authors claim to have achieved the fastest training, but no relevant experimental results are provided. If the authors still claim that L-TTA has advantages in training speed, experimental results should be provided.
1. As a suggestion for future work, the authors could consider modifying the initial layers of the network to achieve different trade-offs between memory usage and accuracy.
1. Parts (b) and (c) are missing in Fig. 2.
1. In Eq. 2, $p(\gamma_\mu, \gamma_\Sigma; \mu)$ denotes $\mu$ as a parameter, which is unconventional. It is suggested to change it to $p(\mu; \gamma_\mu, \gamma_\Sigma)$.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough review of our manuscript. Please see our detailed responses to your comments outlined below.
**Weakness 1.**
>1. GCAL is designed on top of the squeeze-and-excitation layer (SE layer), where the output of the sigmoid function is defined as the channel attention ($=\mu$) of an intermediate feature. In L-TTA, learning to fix this output at $1$ means the attention itself is not used as a feature.
Instead, GCAL simultaneously extracts the variance ($=$ uncertainty) and minimizes it with *Eq. 4/Eq. 5*. The idea is to pre-minimize the uncertainty on the source domain during pre-training and then, at test time, capture the difference in uncertainty under the same channel attention when the target domain arrives.
>2. For simplicity, we define DEL as the structure in which DWT and IDWT encapsulate the current stem layer ($=7\times7$ CONV layer) and our designed GCAL. DWT and IDWT are not operations unique to our proposal; we illustrate them in the bottom-right corner of *Figure 2*, and the specific operations are described in detail in *Appendix B* with *Eqs. 7 and 8*.
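For intuition only, here is a minimal pure-Python sketch of a single-level 2D Haar DWT and its inverse on one $2\times2$ block (the actual DEL uses the two-level DWT detailed in Appendix B; sub-band naming conventions vary across references).

```python
def haar_dwt_2x2(a, b, c, d):
    """Single-level 2D Haar DWT of a 2x2 block (a b / c d).
    Returns (LL, LH, HL, HH): approximation plus horizontal,
    vertical, and diagonal detail coefficients (orthonormal filters)."""
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar_idwt_2x2(ll, lh, hl, hh):
    """Inverse Haar transform: perfectly reconstructs the 2x2 block."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    return a, b, c, d

coeffs = haar_dwt_2x2(1.0, 2.0, 3.0, 4.0)
print(haar_idwt_2x2(*coeffs))  # (1.0, 2.0, 3.0, 4.0)
```

Because the IDWT maps the sub-bands back losslessly, wrapping the CONV layer and GCAL between DWT and IDWT leaves the output feature shape unchanged, which is why DEL can replace the stem layer directly.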
**Weakness 2.**
>- **(1 & 2)**: While it would be nice to include comprehensive results to avoid confusion, as in Weakness 2-1 and 2-2, we focus on a fair accuracy comparison with the state-of-the-art performance (i.e., REALM) in Table 1 to show the trade-offs (ResNet26-CIFAR10 and ResNet50-ImageNet) effectively. In Figure 4, on the other hand, we have organized the manuscript to simply compare memory usage.
To take these concerns into account, we simultaneously compare accuracy and memory usage for CIFAR10-C/CIFAR100-C on ResNet50, the same model as ImageNet in Table 1, for comparison with EcoTTA in Table 6.
We may consider adding such a clarification to the manuscript.
>- **(3)**: In the manuscript, we report accuracy and memory usage using the ResNet26 and ResNet50 backbones.
ResNet26 and ResNet50 use the same stem layer ($=7\times7$ CONV layer), so the memory usage attributable to model parameters is identical; the values differ only because of the batch size.
**Weakness 3.**
>- In this study, we argue that the performance of 'Ours' is significant when considering the trade-off between memory usage and accuracy at test time.
>- For example, ‘Ours’ performs **2.7%** worse than REALM in terms of accuracy but **98.3%** better in terms of memory (see *Figure 4*).
**Question 1.**
>- We reconstruct and train only the stem layer ($=7\times7$ CONV layer) for TTA, so the same memory usage is measured on ResNet Series (Resnet-26, 101, 152, etc.) and advanced architectures such as ResNext[A], which have the same kernel size as the corresponding stem layer.
>- **Reference**
> - [A] Aggregated residual transformations for deep neural networks, *2017 CVPR.*
**Question 2.**
>- To measure training time and FLOPs fairly, we record the average of five runs of each methodology using the profiler officially provided by the PyTorch framework used in our experiments. As shown in Table A, compared to TENT, training time improves by $4.18\times$ on **CPU** and $11\times$ on **GPU**, and KFLOPs by roughly $11{,}429\times$.
> | Method | CPU (ms) | GPU (ms) | KFLOPs |
|:------:|:--------:|----------|:------:|
|TENT|1519.4|40.965|81922|
|Ours|362.83|3.7162|7.168|
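The measurement protocol (average of five runs) can be sketched generically; the snippet below is not the PyTorch profiler itself, just an illustrative stdlib wall-clock timer following the same averaging scheme, with a hypothetical workload standing in for one adaptation step.

```python
import time

def mean_runtime_ms(fn, runs=5):
    """Average wall-clock time of fn over several runs, in milliseconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000.0)
    return sum(times) / len(times)

# Hypothetical workload standing in for one adaptation step.
workload = lambda: sum(i * i for i in range(100_000))
print(f"avg over 5 runs: {mean_runtime_ms(workload):.2f} ms")
```

In the actual experiments, the profiler additionally reports FLOPs, which wall-clock timing alone cannot provide.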
**Question 3.**
>- In the proposed architecture, the only parameter that can be considered for trade-off is the **reduction size** of the SE Block. For this purpose, we perform an ablation study in *Appendix D*. (See the table below)
> | Reduction | CIFAR-10-C | CIFAR-100-C | ImageNet-C |
|:---:|:---:|:---:|:---:|
| 4 | 18.0 | **37.5** | 67.4 |
| 8 | **17.9** | 39.2 | 67.2 |
| 16 | 19.2 | 38.2 | 66.5 |
| 32 | 18.0 | 39.4 | **64.9** |
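As a rough illustration of what the reduction hyperparameter controls, here is a minimal SE-style excitation sketch in pure Python (random weights stand in for learned ones; this is our simplification, not the paper's implementation): the reduction $r$ shrinks the bottleneck $C \to C/r \to C$, trading capacity for parameters.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_excitation(channel_means, reduction, seed=0):
    """SE-style excitation: squeeze (given per-channel means) ->
    FC(C -> C/r) -> ReLU -> FC(C/r -> C) -> sigmoid attention.
    Weights are random placeholders for learned parameters."""
    rng = random.Random(seed)
    c = len(channel_means)
    hidden = max(1, c // reduction)  # bottleneck width set by `reduction`
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(hidden)]
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(c)]
    h = [max(0.0, sum(w * x for w, x in zip(row, channel_means))) for row in w1]
    return [sigmoid(sum(w * v for w, v in zip(row, h))) for row in w2]

attn = se_excitation([0.2, 0.8, 0.5, 0.1, 0.9, 0.4, 0.3, 0.6], reduction=4)
print(len(attn))  # 8 attention weights, one per channel, each in (0, 1)
```

A larger reduction (e.g., 32) gives a narrower bottleneck and fewer parameters, which is the trade-off explored in the ablation above.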
**Question 4. & Question 5.**
>- It’s our mistake. We accept the reviewer's comments and will revise the manuscript to address Questions 4 and 5.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The performance of L-TTA in reducing memory usage is indeed impressive. However, after considering the other reviews and your replies, I have some remaining concerns:
- Idea: The core idea of L-TTA involves updating only the stem layer during forward propagation to reduce memory usage. Based on my experience, this approach may cause shifts that impact the performance of subsequent layers. Are there similar studies addressing this issue, or is this the first time it has been proposed in TTA?
- Method: I am still unclear about how the GCAL layer's output is utilized in TTA. The supervision target during the pretraining phase appears to lead to collapse with $\gamma_\mu=1$ and $\gamma_\sigma=0$. How does such training enable the network to gain output uncertainty?
- Experimental Section: The current experimental section requires major revision. The organization is unclear, and the main results should include all datasets for each group. A consistent backbone across different groups is also needed. Additionally, Table 1 should display the memory usage for each method, which is the core contribution of the paper.
Lastly, if L-TTA is claimed to provide the fastest training, as mentioned in the summary of contributions, a comparison with SOTA methods (e.g., REALM, EATA, EcoTTA) should be included.
---
Rebuttal 2:
Comment: We appreciate the reviewer's thoughtful and additional comments.
We used publicly available code from other methodologies to provide experimental results to demonstrate the reviewer's concerns, which is why our response was somewhat delayed.
**Our prepared responses are long, so we are splitting them into three official comments.**
**Response - Idea**
>1. We note that lines 51-53 of the manuscript state: “Our research begins with the hypothesis that fine-tuning the first convolutional (CONV) layer, known as the stem layer, can significantly impact the TTA results. This is based on the understanding that domain shifts in input images affect model outcomes.” We clarify this sentence with the four sequential explanations below.
> - In short, we design the stem layer so that when pretrained on the source domain, all layers are encouraged to have the correct output for inputs with low uncertainty.
> - Obviously, the target domain has a potentially higher uncertainty due to the different data distribution (especially in the high frequency domain).
> - Based on these two facts, we minimize uncertainty about the unlabeled data in the target domain.
> - This process helps ensure that the performance of frozen subsequent layers can be leveraged like it was in the source domain.
>2. T3A [A] and LAME [B] are studies that, like ours, update the parameters of a single layer. However, because they update the last layer, they must still perform a forward pass through all layers of the model.
> - According to our literature survey, this is the first proposal in TTA to update only the initial convolutional (stem) layer.
> - T3A and LAME report their results for ResNet50 and ResNet18, respectively; we compare them with our results in **Tables J and K** below.
>- **Table J: T3A - ResNet50**
> | Method | CIFAR-10-C | CIFAR-100-C |
|:---:|:---:|:---:|
| Source | 29.15 | 60.34 |
| TENT | 14.27 | 40.72 |
| T3A | 15.44 | 42.72 |
| Ours | **14.1** | **36.7** |
>- **Table K: LAME - ResNet18**
> | Method | CIFAR-10-C | CIFAR-100-C |
|:---:|:---:|:---:|
| Source | 42.3 | 66.6 |
| TENT | 18.8 | 40.3 |
| LAME | 44.1 | 68.8 |
| Ours | **15.8** | **39.5** |
>- Reference
> - [A] Test-time classifier adjustment module for model-agnostic domain generalization, NeurIPS, 2021.
> - [B] Parameter-free online test-time adaptation, CVPR, 2022.
**Response - Method**
>- In GCAL, $\mu$ and $\sigma$ denote channel attention and uncertainty, respectively; as shown in Figure 2, they are extracted as distinct values at the same time, so there is no concern about conflict. The trick of obtaining the mean and variance of values passed through an activation function is often used in studies addressing out-of-distribution data.
>- Under the assumption of a Gaussian distribution for the single channel $x$ (= Intermediate feature) output from GCAL, $\mu$ and $\sigma$ are defined as the probability density functions as follows:
> - $PDF(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right).$
>- As a result, along with the PDF, we are driven to minimize $\sigma$ with the loss term summarized below.
> - $\mathrm{NLL}(\mu, \sigma) = -\log\left(\frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(\mu - \mu_{gt})^2}{2\sigma^2}\right)\right), \quad \text{where } \mu_{gt} = 1.$
>- Note that $\mu$ represents the mean for $x$.
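The loss above can be checked numerically; the following minimal pure-Python version (our illustration of the stated formula, not the training code) shows that with the target $\mu_{gt}=1$ and $\mu$ pinned at the target, shrinking $\sigma$ strictly lowers the NLL, which is exactly the "minimize uncertainty" behaviour.

```python
import math

def gaussian_nll(mu, sigma, mu_gt=1.0):
    """Negative log-likelihood of mu_gt under N(mu, sigma^2),
    i.e. the loss term stated above with target mu_gt = 1."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) \
        + (mu - mu_gt) ** 2 / (2 * sigma ** 2)

# With mu at the target, the residual term vanishes and only
# 0.5 * log(2*pi*sigma^2) remains, so smaller sigma => lower loss.
print(gaussian_nll(1.0, 1.0))   # ~0.919
print(gaussian_nll(1.0, 0.1))   # much lower
```

Note that when $\mu \neq \mu_{gt}$, the residual term penalizes overly small $\sigma$, so the loss cannot collapse by shrinking $\sigma$ alone.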
*Please see additional official comments below:*
---
Rebuttal 3:
Comment: **Response - Experimental Section**
>- We fully understand the reviewer's (sFE2) concerns about the current experimental section. First of all, we would like to explain the reason for the organization of the current manuscript and the purpose of the experimental section and show additional experiments along with the dilemma for the proposed revision.
> - **The reason why Figure 4 shows the results for memory usage in isolation**
> - Most of the methodologies shown in Table 1, including REALM, perform test-time adaptation using the entropy minimization (EM) strategy proposed in TENT. EcoTTA and the memory profiler we adopted, TinyTL, combine the model size with the size of the activation maps; since EM-based methodologies require no auxiliary network, their memory usage and backward-dependent training time are measured identically.
> - However, for DDA and EcoTTA, there is an auxiliary network in addition to the model, so we compare their results with the other methodologies in Tables L and M below, including training time and memory usage.
> - *Note that we used the official profiler provided by the Pytorch framework to fairly measure training time for all methodologies.*
> - *Important: Please note that we will add these tables to the manuscript in some form.*
> - **In Section 4 (Experimental Results), we summarize two main points we aim to demonstrate.**
> 1. Is it capable of delivering acceptable prediction accuracy compared to state-of-the-art (SOTA) methods that focus on accuracy?
> - To this end, Table 1 compares against existing SOTA methods, focusing on REALM. Note that these studies give little consideration to reducing memory.
> - Also, the experimental setups shown in Table 1, such as ResNet26-CIFAR10 and ResNet50-ImageNet, are conventional experiments in TTA that are carried over from previous studies.
> 2. Compared to state-of-the-art methods that focus on memory, how much memory can be reduced? (Figure 4)
> - For this purpose, Figure 4 shows a comparison with existing SOTA methods based on EM, including REALM.
> - **About the dilemma**
> - To report a fair comparison of memory usage in Table 1, we would need to include experimental results for EcoTTA. However, EcoTTA does not provide results for ResNet26, and since different analytical perspectives can lead to different results, we would need the original authors' help to report their methodology reliably.
> - The reviewer's concern could be easily addressed if only the results for ResNet50 were required, as Table 1 and Table 6 in Appendix E could be integrated. However, this is probably not a significant difference, since most methodologies are based on EM, as illustrated in Figure 4.
> - One concern is that any sweeping changes to the intent of Section 4 would likely require agreement from other reviewers.
> - **More experiments and discussions for ResNet26**
> - Additionally, we experimented with CIFAR100-C and ImageNet-C on ResNet26 to show that they are comparable to other TTA methodologies and report the results in Tables O and P below.
*Please see additional official comments below.*
---
Rebuttal Comment 3.1:
Comment: Thank you for your detailed and thorough response. I will increase my score accordingly.
---
Reply to Comment 3.1.1:
Comment: We appreciate your choice to raise the score. Your reviews and comments on the submitted manuscript have been very helpful in improving the quality of our research.
Best regards,
---
Rebuttal 4:
Title: Tables
Comment: >- **TABLE L: ResNet50**
> |Method|CPU (ms)|GPU (ms)|KFLOPs|Memory (MB)|
|---|:---:|:---:|:---:|:---:|
|Entropy Minimization (TENT, EATA, MEMO, SFT, and REALM) | 1519 | 41 | 81922 | 1486 |
|DDA (Diffusion baseline) |18182|7|16356737|365|
|EcoTTA|639|6|21156|296|
|Ours|**363**|**4**|**7**|**26**|
>- **Table M: ResNet26 (EcoTTA is not available now)**
> | Method | CPU (ms) | GPU (ms) | KFLOPs | Memory (MB) |
|---|:---:|:---:|:---:|:---:|
|Entropy Minimization (TENT, EATA, MEMO, SFT, and REALM) | 935 | 39 | 81922 | 586 |
|DDA (Diffusion baseline)|18078|4.8|6595133|182|
|EcoTTA|N/A|N/A|N/A|N/A|
|Ours|**352**|**4**|**7**|**24**|
>- **Table O: ResNet26 / CIFAR100-C**
> | Method | Gauss. | Shot | Impul. | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Source | 96.2 | 95.1 | 97.4 | 34.5 | 79.9 | 43.4 | 30.2 | 43.7 | 53.8 | 43.7 | 27.0 | 55.6 | 50.6 | 82.7 | 58.9 | 59.5 |
| Ours w/o GCAL | 88.1 | 85.2 | 81.4 | 33.3 | 83.0 | 38.8 | 28.2 | 47.2 | 53.4 | 49.3 | 26.8 | 65.2 | 51.9 | 81.1 | 52.9 | 57.7 |
| Ours w/o DEL | 67.5 | 64.3 | 65.2 | 31.6 | 63.6 | 33.7 | 28.2 | 40.5 | **40.7** | **38.4** | 24.8 | **27.8** | **45.0** | 38.9 | 52.9 | 44.2 |
| Ours | **63.5** | **60.6** | **62.7** | **30.3** | **63.0** | **33.9** | **27.7** | **39.9** | 40.8 | 41.0 | **24.6** | 29.0 | 46.4 | **36.1** | **51.1** | **43.4 (+16.1)** |
>- **Table P: ResNet26 / ImageNet-C**
> | Method | Gauss. | Shot | Impul. | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Source | 99.8 | 99.6 | 99.8 | 89.3 | 93.0 | 90.7 | 79.8 | 88.7 | 83.2 | 82.9 | 49.5 | 98.4 | 89.5 | 87.8 | 75.3 | 87.2 |
| Ours w/o GCAL | 95.2 | 93.1 | 95.7 | 88.8 | 93.3 | 90.6 | 78.7 | 85.1 | 78.0 | 80.2 | 44.5 | 94.9 | 81.1 | 60.4 | 65.6 | 81.7 |
| Ours w/o DEL | 92.6 | 92.0 | **92.1** | 91.3 | 90.7 | 83.1 | 70.8 | **72.0** | 72.1 | **60.7** | **40.4** | 91.1 | 69.8 | 67.0 | 76.2 | 77.5 |
| Ours | **92.3** | **91.2** | 93.2 | **86.3** | **87.2** | **78.5** | **68.7** | 73.7 | **69.6** | 62.6 | 42.2 | **85.3** | **64.0** | **53.8** | **65.0** | **74.2 (+13.0)** | | Summary: In this paper, the authors enhance the efficiency of test-time adaptation (TTA) without necessitating forward/backward passes of the main model. To this end, they introduce a domain embedding layer consisting of a two-level discrete wavelet transformation, which extracts meaningful components in different frequencies. Then, based on the extracted frequency components, they estimate the channel-wise uncertainty with a squeeze and excitation module, which is minimized during testing for TTA. Experiments on benchmark datasets of corrupted images demonstrate the effectiveness of the proposed method. However, some concerns require further addressing. My detailed comments are as follows.
Strengths: 1. This paper introduces a lightweight TTA solution that only updates the first stem layer without necessitating the forward/backward passes of the main model. This broadens the applicability of TTA in practice, e.g., on resource-limited devices or in latency-sensitive scenarios.
2. The two-level discrete wavelet transformation provides information from multi-views without changing the shape of output features. This benefits its integration into existing models.
3. Experiments on benchmark datasets of corrupted images demonstrate the effectiveness of the proposed method.
Weaknesses: 1. The proposed method necessitates altering model training for warm-up initialization, which incorporates dependency on the source dataset. This reliance restricts its applicability to the fully test-time adaptation setting, where source data is unavailable for data privacy consideration [A]. Moreover, this may make the model drift toward the source samples as discussed in [1].
2. The performance of the proposed methods may be hindered due to the neglect of the coupling between the shallow model and the deeper model. The authors solely update the stem layer with the local uncertainty in a gradient-isolated manner. However, such optimization does not guarantee the enhancement of the deeper model, and may easily lead to overfitting in a challenging dataset as evidenced in Figure 6. In essence, L-TTA may serve as an image augmentation technique, which enhances meaningful image features without considering the main model.
3. I’m concerned that the stem layer does not provide sufficient learning capacity. From SFT [B], shallow layers are ineffective in handling feature-level and output-level distribution shifts. As shown in Table 3, the proposed method offers a marginal improvement on ImageNet-Sketch and ImageNet-A with TTA.
4. The experiment results require careful review. For example, in Table 1, the performance of TENT [A] and EATA[B] is significantly lower than the results reported in the original paper. In Table 3, the performance of the source model on ImageNet-R is also inconsistent with the value reported in [B]. It’s advisable to provide a detailed configuration of hyperparameters for the compared methods in the appendix to ensure comparison fairness.
<=======================after rebuttal========================>
I appreciate the authors’ response, and most of my concerns have been addressed. Here, I still have the following suggestions:
1. Conduct ablation experiments to demonstrate the impact of the warm-up strategy and how the number of source samples used for warm-up affects the method’s performance.
2. Clearly discuss the application scope of the proposed method and the risk that the local uncertainty optimization might lead to an overfitted trivial solution, along with potential solutions to mitigate this issue.
Overall, I believe this paper introduces a promising new paradigm for addressing super lightweight TTA, which is in high demand in real-world applications, and the proposed local uncertainty minimization also offers new insights to the TTA community. By carefully reading other reviewer’s comments and the rebuttal, considering the paper’s current scores and it has addressed the concerns of Reviewer sFE2 and iLad, I increase my rating accordingly for a kind support. I highly encourage the authors to consider my suggestions above and include more discussions with recent works such as [1-4] if the paper gets accepted.
Lastly, please consider releasing the code if the paper is accepted, and I would like to try it out.
Technical Quality: 3
Clarity: 2
Questions for Authors: 5. Discussions with the related TTA methods can be enhanced, including [2-3] for test-time uncertainty reduction, and [4] for backpropagation-free TTA. It’s recommended that the authors include the discussions to ensure a more comprehensive study.
6. It appears counter-intuitive that BN_STAT [C] requires more memory than the proposed method in Figure 4. Since online TTA performs both adaptation and inference on the current test samples before processing the incoming ones, the cost of forward passes can be ignored when viewed as a fundamental procedure in standard inference.
7. The result of the proposed method under a small batch size is unpersuasive. From Figure 5, L-TTA with a batch size of 4 still deteriorates compared with standard inference, achieving an error rate of 82.0%.
8. It’s recommended to include comparisons on ViT-Base to verify the effectiveness of the proposed method on larger networks.
[A] Tent: Fully test-time adaptation by entropy minimization, ICLR 2021.
[B] Surgical fine-tuning improves adaptation to distribution shifts, arXiv 2022.
[C] Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift, arXiv 2020.
[1] FEATHER: Lifelong Test-Time Adaptation with Lightweight Adapters, arXiv 2024.
[2] Test-time Adaptation for Machine Translation Evaluation by Uncertainty Minimization, ACL 2023.
[3] Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting, arXiv 2024.
[4] Test-Time Model Adaptation with Only Forward Passes, ICML 2024.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discussed the limitations, and there are no significant ones.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments on our work. We are grateful for your expertise and insightful feedback. :) We respond to each comment below.
**Weakness-1.**
>- TENT also uses pre-trained weights; indeed, our method can be trained from scratch instead of warming up. The warm-up is simply a time-saving trick. (Note that the warm-up uses the same source dataset as the pre-training phase.)
**Weakness-2. & Weakness-3.**
>- As noted in Weaknesses 2 and 3, L-TTA performs optimization on a shallow layer, which is a limitation for challenging datasets with a large number of classes. Our research targets mobile environments, where memory usage during training must be drastically minimized while still achieving acceptable accuracy.
**Weakness 4.**
>- In Table 1, TENT and EATA are run under the same training conditions as REALM, which this manuscript reports as the state of the art, to ensure a fair comparison.
>- In Table 3, we show the results of fine-tuning the pre-trained model (ImageNet-1K) provided by the PyTorch framework under the same training conditions as TTA on ImageNet-R. However, we note that the error reported by EATA is quite low and needs further verification with the official code provided by EATA.
>- To address the concerns in this comment, we can provide a detailed configuration in the appendix.
**Question 1.**
>- [A] focuses on language models, and the uncertainty described there corresponds to the entropy used in the entropy minimization shown in the manuscript. In other words, uncertainty is estimated as a consequence of the classification result, and Monte Carlo Dropout is used for this purpose.
>- The uncertainty expressed in [B] focuses on calibration to improve the performance of the Fisher regularization proposed in EATA by reducing the uncertainty about the data through the whole network and sub-networks. Here, uncertainty is estimated as the discrepancy in the predicted labels between networks.
>- [C] is introduced as a methodology that assumes resource-constrained environments such as FPGAs and non-configurable parameters and is limited to vision transformer models. It also proposes an approach that focuses on the embedding layer similar to the proposed L-TTA, but adds an activation shifting module instead of learning to optimize the embedded token to the current model parameter.
>- Reference
+ [A] Test-time Adaptation for Machine Translation Evaluation by Uncertainty Minimization, *ACL 2023.*
+ [B] Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting, *arXiv 2024.*
+ [C] Test-Time Model Adaptation with Only Forward Passes, *ICML 2024.*
**Question 2.**
>- In our paper, even the forward pass is only performed on the stem layer, so for a more rigorous comparison, Fig. 4 shows the measured overall memory usage, assuming the model is loaded in memory.
**Question 3.**
>- The y-axis shown in Figure 5 has a typo: it is currently labeled $[100, 95, 90, 85, 95, 90, 85]$, whereas the correct labels are $[100, 95, 90, 85, 80, 75, 70]$.
Therefore, the results for batch size 4 show a better result than 82.0%. We provide the table below with the accurate data.
| Method | Batch Size (1) | Batch Size (2) | Batch Size (4) | Batch Size (8) |
|:------:|:--------------:|:--------------:|:--------------:|:--------------:|
| TENT| 99.89| 93.25| 79.82| 68.43|
| EATA| 99.86| 99.52| 95.25| 77.19|
| Ours|**75.16**|**72.14**|**68.75**|**66.79**|
**Question 4.**
>- We additionally provide experimental results for CIFAR10-C/CIFAR100-C on the Swin Transformer-Base model [D].
As shown in the tables below, applying our proposed approach improves the average error by 1.95 and 2.95 percentage points on the two datasets, respectively.
>- **CIFAR10-C**
> - | Method | Gauss. | Shot | Impul. | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |
|:-------------:|:------:|:-----:|:------:|:------:|:-----:|:------:|:----:|:----:|:-----:|:----:|:-----:|:------:|:-------:|:-----:|:-----:|:-----:|
|Baseline| 42.44 | 37.92 | 43.64 | 4.61 | 23.49 | 7.90 | 3.31 |**4.09**| 7.55 |**7.15**|**1.76**|**4.56**|9.46| 13.68 | 14.29 | 15.06 |
|Ours|**33.16**|**29.30**|**40.64**|**4.38**|**20.46**|**7.00**|**2.97**| 4.28 |**5.99**|8.57|1.93|5.01|**9.20**|**9.91**|**13.79**| **13.11** |
>- **CIFAR100-C**
> - | Method | Gauss. | Shot | Impul. | Defoc. | Glass | Motion | Zoom | Snow | Frost | Fog | Brit. | Contr. | Elastic | Pixel | JPEG | Avg. |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Baseline | 74.88 | 72.39 | 73.99 | 18.39 | 63.04 | 25.52 | 16.09 | 20.95 | 27.91 | **27.04** | **10.92** | **24.69** | 30.28 | 37.93 | 40.03 | 37.60 |
| Ours | **65.97** | **62.67** | **72.75** | **17.79** | **53.00** | **23.98** | **15.31** | **20.89** | **25.10** | 30.40 | 12.18 | 27.51 | **28.11** | **25.94** | **38.15** | **34.65** |
>- Reference
> - [D] Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows, *ICCV 2021.* | Summary: The paper focuses on reducing the memory usage of test-time adaptation (TTA) by remodelling the first layer. The authors apply a discrete wavelet transform to input features and use squeeze-and-excitation blocks to obtain per-channel uncertainty. The proposed loss function minimizes the per-channel uncertainty. The memory usage of the proposed method is significantly less than that of methods requiring full forward and backward passes of the backbone.
Strengths: - The paper addresses a practical problem for TTA.
- The proposed method significantly reduces the memory consumption.
Weaknesses: - **Gaussian Channel Attention Layer**
- The authors should explicitly show the loss function for pre-training and TTA for clarity. During pre-training, is equation 4 or 5 (the uncertainty loss) added to the pre-training loss?
- The authors should dedicate more effort to explaining why minimizing per-channel uncertainty can help TTA, which is more important than just the method itself.
- I can’t understand why the mean is set to 1 and why setting the mean to one makes the SE block an uncertainty extractor.
- It seems this approach has nothing to do with attention. Could the author elaborate on the naming?
- Unclarity of presentation
- Where is DEL in Figure 2? Both the caption of Figure 2 and section 3.3 mentioned DEL in Figure 2.
- In the caption of Figure 2, it mentions *. Where is it?
- The authors should briefly explain $\psi_k$ and LL/LH/HL/HH in the caption of Figure 2.
- “We jointly train it to reduce task-specific prediction errors using suitable loss terms (*i.e.*, cross-entropy) in conjunction with the uncertainty extracted from GCAL minimized through NLL loss. “ Only cross-entropy is used or equation 4/5 is added as well?
- During TTA, only equation 5 is minimized?
- Equation 6, what’s the W tilde? The authors need to define variables before using them.
- What information does figure 3(b) try to convey? What does the high and low uncertainty difference mean for TTA?
- “the absence of a method to ensure data independence”. What does the data independence mean here?
- Since this paper is about efficient TTA, in addition to memory usage, it should also report training time and FLOPs.
- Grammar
- line 37: “the possibility of adaptation when trained to minimize the entropy of predictions” is missing a subject.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please check the weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors should address the limitations more.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough review on our manuscript. Please see our detailed responses to your comments outlined below.
**Weakness-1.**
>- Yes, please see lines 230-231 of the manuscript.
“*Eq. 4 means minimizing uncertainty for all channels, which applies equally to pretraining and TTA processes.*”
>- Our proposed GCAL serves to extract uncertainty (see Eqs. 2 and 3) instead of channel attention, which can be obtained by exploiting the squeeze and excitation layer (SE layer).
As described in the manuscript, learning the extracted channel attention ($=\mu$) to always converge to 1 is equivalent to not using channel attention.
Instead, since we can maintain the same $\mu$ in the source and target domains, we can identify gaps in the variance ($=$ uncertainty) and minimize them through the NLL loss.
This is demonstrated in *Figure 3 (b)* and *Eq.6.*
*Note that Figure 3 (b) is an $8\times8$ heatmap of the uncertainty of the intermediate feature, which has 64 ($8\times8$) channels.*
>- As mentioned above, the channel attention ($=$ importance) extracted by the squeeze-and-excitation (SE) layer is disabled by fixing it to 1. The SE layer learns during training to determine the channel attention of intermediate features, but we change its role to extracting uncertainty.
>- It refers to the channel attention ($=$ importance) extracted by the squeeze-and-excitation layers, not the attention in the self-attention mechanism.
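As a rough numerical illustration of the mechanism described above (mean pinned to 1, variance treated as the channel uncertainty), the following sketch assumes a per-channel Gaussian NLL; the function name and toy data are our assumptions, not the paper's code. Minimizing this NLL over the variance drives it toward the empirical squared deviation of the feature from 1, which is the quantity that shrinks as TTA reduces uncertainty.

```python
import numpy as np

def channel_nll(x, var, mean=1.0):
    """Gaussian NLL of channel activations x under N(mean, var), with the
    mean pinned to 1 so that only the variance ('uncertainty') is free."""
    return 0.5 * np.log(2 * np.pi * var) + np.mean((x - mean) ** 2) / (2 * var)

rng = np.random.default_rng(1)
x = 1.0 + 0.3 * rng.standard_normal(10000)  # toy channel activations

# NLL is minimized where the variance matches the empirical second moment about 1
grid = np.linspace(0.01, 1.0, 500)
best_var = grid[int(np.argmin([channel_nll(x, v) for v in grid]))]
empirical = np.mean((x - 1.0) ** 2)
```

Up to grid resolution, the NLL-minimizing variance coincides with the empirical second moment about 1, so pushing the NLL down is equivalent to shrinking the measured channel uncertainty.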
**Weakness-2.**
>- The structure of DEL encapsulates the GCAL and CONV layers with DWT and IDWT; this encapsulation is visible in Figure 2.
>- In Figure 2, you can find the '$*$' located with $\psi_k$ within the yellow box (bottom right) describing DWT and IDWT.
>- Owing to the limited number of pages in the manuscript, the explanation of these notations is inevitably shortened, and we instead explain in the caption that they can be found in *Appendix B*.
>- Cross Entropy and Eq.5 are trained jointly while pretraining, and only Eq.5 is used while performing TTA.
>- Yes.
>- $W$ is the weight for the corresponding function $F_{se}$. It is used the same as in *Equation 1*.
>- The four $8\times8$ heatmaps show the difference in attention across the 64 channels of the intermediate features between the original domain (Source) and the corrupted one (Target) for the same image. Before TTA (left), LFC shows a fundamentally large difference while HFC shows almost none. After TTA (right), LFC is nearly unchanged from before TTA while HFC shows a very large difference. This significant change means that performing TTA substantially changes HFC's uncertainty about the target domain. This supports the claim that HFC can help the generalization process and lends more confidence to the uncertainty minimization process.
>- Data independence means that TTA can be performed with reduced uncertainty no matter what data from the target domain is input. We present this as a contribution because existing TTA studies have used strategies that minimize entropy by filtering only data with low entropy.
**Weakness-3.**
>- To measure training time and FLOPs fairly, we record the average of five runs of each method using the profiler officially provided by the PyTorch framework used in our experiments. As shown in Table A, compared to TENT, training time is improved by $4.18\times$ on CPU and $11\times$ on GPU, and KFLOPs by $11428\times$.
**Table A**
>| Method | CPU (ms) | GPU (ms) | KFLOPs |
|:------:|:--------:|:--------:|:------:|
| TENT | 1519.4 | 40.965 | 81922 |
| Ours | 362.83 | 3.7162 | 7.168 |
**Weakness-4.**
>- We reorganized the sentence with “The possibility of adaptation when the models are trained to minimize the entropy of predictions.” | Rebuttal 1:
Rebuttal: We are thankful to the reviewers for their thoughtful reviews. Our efforts to respond to your questions and comments have led us to identify improvements to be made in the manuscript, which we have incorporated in the final manuscript.
We have endeavored to provide detailed comment-by-comment responses to each reviewer's official review. Due to the lack of available space in individual responses, we refer to the Appendix for some of our answers.
Furthermore, to briefly summarize our paper once again, we focus on utilizing the reconstructed stem layer to make the network robust in mobile environments with minimal effort. Therefore, in our experimental results, we focus most on memory usage compared to prior works while aiming to show acceptable prediction performance.
**Common issues**
- To simply summarize the question about the test time adaptation (TTA) process, the proposed TTA minimizes the uncertainty extracted from the gaussian channel attention layer (GCAL) only with Eq. 4 and Eq. 5.
- Eq. 4: TTA without DEL / Eq. 5: TTA with DEL
- The mean ($=\mu$) in the manuscript refers to the channel attention of the intermediate feature.
- Fixing it to 1 is intended to measure uncertainty at the same attention without distinguishing between source and target domains.
- In response to requests for clarification on limitations and scalability-related work, we provide this through further experimentation.
- Two reviewers asked for a discussion of similar papers and we provide a response.
- We only recorded about memory usage, but there are comments about training time and computation, so we provide the measurements in a table.
- We have only covered memory usage because it's the most critical metric that consistently impacts power consumption in edge devices.
- We have been receiving requests about the statistical significance (i.e. error bars) of our results, so we have performed additional experiments and report them in a table.
For additional comments, please use the 'Official Comment' button and we will respond as soon as possible. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
An effective framework for estimating individualized treatment rules | Accept (poster) | Summary: This paper proposes a framework for estimating individualized treatment rules (ITRs) in precision medicine applications. The traditional methods for ITR estimation rely on inverse probability weighting (IPW) and L1-penalization, but these methods may have limitations such as statistical bias and computational bias. The proposed framework uses model-free distributional covariate balancing and a hard L1-ball constraint to address these issues. The optimal ITR is computed using projected gradient descent (PGD). The paper provides a comprehensive analysis of the framework, including convergence guarantees and statistical estimation guarantees. Simulation studies demonstrate that the proposed framework outperforms existing methods in terms of robustness and effectiveness for ITR learning.
Strengths: 1. The focus on weight optimization to enhance the efficiency of Individualized Treatment Rules (ITRs) introduces a useful approach. The paper proposes an algorithm using projected gradient descent (PGD), providing a new angle on optimizing treatment protocols.
2. The paper is supported by a solid analytical framework, combining theoretical guarantees with simulation studies and real data analysis. This mix of theoretical and empirical evidence effectively demonstrates the method's reliability.
3. The research contributes to the field of personalized medicine by potentially improving the efficiency of treatment customization. Its impact may extend to influencing healthcare practices and patient management.
Weaknesses: 1. Previous papers like "Balanced policy evaluation and learning" from NIPS 2018 and "More efficient policy learning via optimal retargeting" in JASA 2021 also emphasized weight adjustments, focusing on minimizing the MSE of the value estimate. How does this paper differ from those approaches in its methodology and outcomes?
2. The author claims that using distributional covariate balancing weights rather than Inverse Probability Weighting (IPW) can reduce the finite sample bias in ITR-Learning. Can the author theoretically connect this reduction to the findings in Theorem 3.5?
3. This paper applies projected gradient descent (PGD) with constrained optimization to address the ITR problem. Is there a comparison available regarding computation speed between this method and others used in similar contexts?
4. The simulations describe two versions of the proposed method, labeled "penalized" and "constrained," as detailed in Appendix H.2. What are the main differences in tuning strategies between these versions? For a fair comparison, should the same tuning strategy be applied to both methods, or is it justifiable to use different strategies such as one focusing on MSE and the other on empirical value? Clarifying these differences could be crucial, especially if the paper aims to highlight the computational efficiency of the PGD over the penalization method.
5. Some typos: In equation (2), it should be $u_k$ rather than $u$?
6. The title of the paper, with "multi-category treatments," can be confusing. The paper focuses on adjusting the weights for robustness and efficiency rather than extending binary treatments to multiple ones. The benefit of handling multiple treatments may come only from the angle-based direct learning method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper provides a theoretical guarantee for the convergence of the estimated parameter $B$. Does this focus on scenarios with a fixed covariate dimension $p$ and a fixed number of treatments $k$? Is it feasible to extend this to high-dimensional cases or scenarios where $p$ and $k$ depend on the sample size $n$?
2. How does the paper estimate the treatment-free effect in the regression model? Could potential model misspecification for the treatment-free effect impact the performance of the ITR?
3. The paper mentions the use of an ad-hoc Bonferroni correction for variable selection. What is the accuracy of this method for ensuring variable selection consistency? In the NIPS 2022 paper "Learning Individualized Treatment Rules with Many Treatments: A Supervised Clustering Approach Using Adaptive Fusion," a group-lasso method is employed to filter out variables that only contribute to the treatment-free effect, thereby reducing the covariate dimension in $B$. Could further discussion and investigation into this method enhance the performance?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see it in above weakness and questions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > We sincerely thank the reviewer for the extremely insightful comments. Due to space constraints, we had to cut half of our initial response. We would be happy to have a more in-depth discussion.
>**W1:** We see that both of these works take a similar approach to ours by using weighting schemes and opting for directly optimizing the weights, rather than using estimated IPW. However, there are some crucial differences that we would like to point out.\
>[K18] has a general setup for policy evaluation and learning from historical data, where accurate policy evaluation through SAPE is needed for policy learning. The author proposes to optimize the weights directly by minimizing the worst-case posterior MSE. In contrast, our approach avoids the evaluation of policy effects. Instead, we estimate the optimal decision function directly through a weighted convex empirical loss minimization. Our approach introduces a simple spectral criterion for choosing the optimal weights: maximize the minimum eigenvalue of the weighted design matrix, which may be compared with the approach in [K18].\
>[K21] uses a similar approach to [K18] with the novel technique of "retargeting" the population covariate distribution. This idea has an interesting connection to our analysis. Under our generative model assumption, the true parameter $B_ *$ that parameterizes the optimal decision function is the global maximizer of the weighted log-likelihood function $E[w(X,A) \log \pi_B(Y|X,A)]$ for an *arbitrary* weighting function $w(X,A)$. So to minimize the statistical estimation error, we select the weight $w(X,A)$ such that the corresponding weighted Fisher information has the largest minimum eigenvalue. Despite this conceptual similarity, the retargeting weights in [K21] depend only on the covariates, whereas our weights also depend on the treatments.
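To make the spectral criterion described above concrete, here is a minimal, hypothetical numpy sketch (the function name and the candidate weightings are our own illustration, not the paper's code): it computes the smallest eigenvalue of a weighted design (Gram) matrix, the quantity the criterion asks to be maximized over candidate weights.

```python
import numpy as np

def min_eig_weighted_gram(X, w):
    """Smallest eigenvalue of the weighted design matrix X^T diag(w) X / n.

    The spectral criterion picks, among candidate weightings, the one that
    makes this quantity as large as possible (strongest curvature of the
    weighted loss in its worst direction)."""
    n = X.shape[0]
    G = (X * w[:, None]).T @ X / n
    return np.linalg.eigvalsh(G)[0]  # eigvalsh returns ascending eigenvalues

# Hypothetical example: compare uniform weights against a random candidate.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
candidates = {
    "uniform": np.ones(500),
    "random": rng.uniform(0.5, 1.5, size=500),
}
best = max(candidates, key=lambda k: min_eig_weighted_gram(X, candidates[k]))
```

In the actual method the candidate weights would depend on both covariates and treatments; the sketch only shows the eigenvalue criterion itself.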
>**W2:** We kindly ask you to refer to the global response.
>**W3:** Our proposed method takes 4.40 sec per dataset with p=60, n=1000 (glmnet for penalization takes 2.01 sec on average). Other methods face scalability issues (12.02 sec for policytree and 46.75 sec for causalDML). We believe that computational efficiency is one of the advantages of our method.
>**W4:** We fully agree that the same hyperparameter tuning strategy is crucial for a fair comparison between the PGD and the penalized method. In the revised experiments, we have implemented the penalized method using unconstrained subgradient descent to minimize the L1-penalized ITR loss function and unified the hyperparameter selection across both methods by using the value function. In Figure 1 (author response PDF), we observe that the performance of L1-penalization is improved, but PGD still shows overall better performance.\
>To further clarify, we have included Figure 2 (author response PDF). This figure compares the optimization trajectories for multiple L1-penalization parameters and L1-ball sizes using the same stepsizes $\alpha/t$. For a large $\lambda_1=10$ in L1-penalization, which is necessary for a sparse solution, the optimization trajectory fluctuates severely, and the algorithm fails to achieve low values of the true (unregularized) ITR objective. This is because the large L1 penalty significantly perturbs the true ITR objective. In contrast, PGD maintains smooth optimization trajectories and effectively minimizes the true objective across a wide range of L1-ball sizes. This illustrates a key advantage of the constrained method over the penalized method.
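As an illustration of the constrained-vs-penalized distinction discussed above, here is a minimal, hypothetical sketch of projected gradient descent onto a hard $L_1$-ball; the projection follows the standard sorting-based algorithm of Duchi et al. (2008), and the function names and the toy quadratic objective are our assumptions, not the authors' implementation. The gradient step is taken on the unperturbed objective, and sparsity comes from the projection rather than from a penalty term.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto {x : ||x||_1 <= radius} (sorting-based)."""
    if np.abs(v).sum() <= radius:
        return v.copy()  # already feasible
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    cssv = np.cumsum(u)
    # largest index rho with u[rho] * (rho + 1) > cssv[rho] - radius
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (cssv - radius))[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def pgd(grad, x0, radius, stepsize=0.1, iters=500):
    """Projected gradient descent on the *unpenalized* loss under a hard
    L1-ball constraint: gradient step, then snap back onto the feasible set."""
    x = x0.copy()
    for _ in range(iters):
        x = project_l1_ball(x - stepsize * grad(x), radius)
    return x
```

On a toy quadratic loss $\frac{1}{2}\|x - t\|^2$, this converges to the projection of $t$ onto the ball, and the iterate is exactly sparse whenever the solution sits on a face of the ball — unlike a soft penalty, the constraint never perturbs the objective being minimized.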
>**W5:** It is indeed $u$. The vector $u$ is a random treatment vector in $\mathbb{R}^K$, corresponding to the random treatment $A$. We will add the definition in the revision.
>**W6:** Since our framework is not restricted to binary treatments, *multi-category treatment* is included in the title. However, we appreciate your suggestion and will consider changing the title to better reflect the primary focus on effectiveness.
>**Q1:** A revised form of our Thm.3.5 shows that with high probability, the statistical estimation error is $O(\sqrt{p/n}/\mu)$. Thus, in the high-dimensional setting $p= \alpha n$, our bound on the statistical error is non-vanishing. As long as $\alpha = p/n$ is small and the strong convexity parameter $\mu$ stays bounded away from zero, our result shows that the statistical error is also small. However, we believe that a more tailored analysis in the high-dimensional setting (e.g., high-dimensional concentration inequalities) could improve our current error bound. There is some difficulty in the computational complexity of PGD since the largest eigenvalue of $\Psi$ can be order $p$. We are happy to discuss further.
>**Q2:** We use random forests to estimate the treatment-free effect. As the reviewer correctly points out, potential model misspecification can significantly impact the ITR performance. Additional simulations, case 2 and 4, in Sec H show this situation. To address this issue, we combine outcome augmentation with inverse variance weighting to reduce heteroscedastic errors, resulting in more accurate results.
>**Q3:** We considered two simulations with p=60, n=200: one with linear treatment-free/interaction effects involving four covariates, and the other with nonlinear effects involving two covariates. The goal of variable screening is to retain as many relevant variables as possible since we can rely on the PGD for a sparse solution. Thus, our primary interest is in the true positive rate (TPR), the proportion of true covariates correctly identified by screening. With 100 repetitions, for case 1, the mean(sd) TPR is 0.99(0.05), FPR is 0.151(0.05), and accuracy is 0.86(0.04). For case 2, the mean(sd) TPR is 0.995(0.05), FPR is 0.178(0.05), and accuracy is 0.83(0.05). Screening performs well, retaining 99% of true covariates with low FPR. We conjecture that group-lasso may further improve our proposed method by lowering the FPR. We will include this discussion in the revision.
---
Rebuttal 2:
Comment: Thank you for considering my comments in the rebuttal, particularly regarding the incorporation of distributional covariate balancing weights into the theoretical results (W2). Additionally, it would be beneficial to include a time comparison in W3 to demonstrate the advantage in computational efficiency. Besides, variable selection is a crucial task, and exploring methods for selecting relevant variables would enhance the paper's robustness. While I agree that increasing recall is a priority, employing the group lasso (Q3) could also potentially improve precision in selecting treatment-relevant variables. It would be great to explore relevant experiments with discussions.
Overall, this paper is technically solid, and I have accordingly adjusted my rating upwards.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your insightful and constructive suggestions. We have greatly appreciated the opportunity to discuss our work with you. We are pleased to hear that you found our paper technically solid and have adjusted your rating upwards accordingly.
We agree that incorporating distributional covariate balancing into our theoretical analysis as we discussed regarding Theorem 3.5 and including a time comparison to demonstrate the computational efficiency advantage would be beneficial to the readers. We will definitely include these points in the revision.
Regarding variable selection, we acknowledge the importance of this task especially in the high dimensional setting and we will incorporate discussions on methods for selecting treatment-relevant variables, not variables related to treatment-free effects to enhance the paper's robustness. We also appreciate your suggestion to explore Group Lasso for potentially improving precision in selecting treatment-relevant variables. We will elaborate discussion on these aspects in the revision.
Thank you again for your valuable feedback and suggestions. | Summary: This paper presents an approach for estimating individualized treatment rules (ITRs) for linear-in-feature decisions involving multi-category treatments. Unlike previous methods, the authors utilize distributional covariate balancing instead of inverse propensity weighting, and apply a combination of $L_1$ and $L_2$ penalties for regularization. The method is supported by rigorous computational and statistical analysis. Simulations on both synthetic and real datasets demonstrate its effectiveness, particularly in small sample, high-dimensional data settings.
Strengths: The paper is well written and proposes a more robust estimator for linear ITRs in small-data settings by avoiding inverse-propensity weighting, which can yield biased estimates in finite samples. The paper introduces a somewhat novel approach that utilizes distributional weighting and incorporates a hybrid $L_1$-$L_2$ regularization scheme. This approach is supported by rigorous theoretical analysis and thorough experimental evaluation.
Weaknesses: * The paper focuses mostly on linear decision rules for small, high-dimensional datasets which should be clearly stated somewhere in the abstract and emphasized in the introduction. Thus, the scope of the paper is more narrow than advertised.
* The approach is somewhat incremental and not necessarily novel as covariate balancing via distributional weighting has been used in treatment effect estimation tasks before. Furthermore, while the theoretical analysis is robust and valuable, it primarily mirrors that of linear regression with mixed $L_1$-$L_2$ regularization.
* The paper needs a thorough literature review and comparisons with literature on policy learning and treatment effect estimation (see, for e.g. [1]).
* Missing bias-variance tradeoff curves between the methods. The main appeal of this method is that it reduces the finite sample bias by not using IPW. It would be insightful to see how this reduction in bias plays out in the finite sample scenarios within the simulations.
* Missing baselines: Additional baselines should be considered. Eq. 2 suggests that using a linear regression model with k as a one-hot-encoded feature, along with regularization (either $L_1$ or a mixture of $L_1$ and $L_2$) applied solely to the non-k coefficients, could serve as a viable alternative.
Overall, I believe this paper is strong from a technical standpoint but has limited impact compared to existing literature. It is correct and technically meets the criteria for acceptance. However, I remain undecided and am open to revising my score based on further discussion.
[1] Nekipelov, Denis, Vira Semenova, and Vasilis Syrgkanis. "Regularised orthogonal machine learning for nonlinear semiparametric models." The Econometrics Journal 25.1 (2022): 233-255.
Technical Quality: 4
Clarity: 4
Questions for Authors: See points 4 and 5 above. Could you provide a limited evaluation to demonstrate the improvements of your method (in both bias and accuracy) over the IPW method and the proposed outcome-based baseline?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Response for Weakness 1:**
>
>Thank you for your valuable feedback. We agree that clarifying the scope of our work is essential.
In this paper, we focus on linear decision boundaries, which are a standard approach in statistical literature due to their interpretability. One significant advantage of using linear decision classes is the convexity of the associated optimization problem. This convexity allows us to obtain statistical estimation guarantees, recovering the true parameter with high probability under the MLE framework of the generative model.
>
>While our current focus is on linear decision boundaries, it is important to note that our framework can be extended. For example, extending to polynomial decision boundaries would allow us to obtain similar optimization analyses, including computational and statistical guarantees. Additionally, our framework can handle more complex, non-convex decision functions, such as neural networks. In such cases, the problem becomes a non-convex constrained optimization problem, meaning that finding a global minimum is not feasible. However, we can guarantee convergence to a first-order stationary point.
>
>This highlights the flexibility of our framework, which can deliver different computational and statistical guarantees depending on the function class. Although this was not included in the current paper, we recognize the importance of this point and will provide further discussion in the revised version. We sincerely appreciate your valuable comments and feedback.
> **Response for Weakness 2:**
>
>We thank the reviewer for raising this point. We would like to highlight that the main innovation of our work is the significant performance improvement achieved by using an $L_1$-constraint, instead of the soft $L_1$-penalization (or mixed $L_1$ and $L_2$ regularization) commonly used in the statistical literature.
>
>We have theoretically justified this performance improvement. In the traditional statistical literature, incorporating hard constraints into the MLE framework was not feasible. However, we strengthened the MLE analysis to enable statistical estimation even when the true parameter lies on the boundary of the constraint set. This is crucial because if the solution lies in the interior of the $L_1$ ball, it is not sparse; for sparsity, the solution must lie on the boundary of the $L_1$ ball. By combining probabilistic techniques with constrained optimization, we achieved significant improvements.
>
>Regarding the novelty (originality) of our work, we kindly ask you to refer to the global response. Unfortunately, we were unable to include all the details due to character limits. Thank you for your understanding.
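To make the constrained-versus-penalized distinction concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of projected gradient descent under a hard $L_1$-ball constraint. The sorting-based projection thresholds coordinates to exactly zero, which is why solutions on the boundary of the $L_1$ ball are exactly sparse:

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {b : ||b||_1 <= radius}.

    Sorting-based algorithm; coordinates below the threshold theta are
    set exactly to zero, so solutions on the ball's boundary are sparse.
    """
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]            # magnitudes, descending
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def pgd(grad, b0, radius, step=0.1, iters=500):
    """Projected gradient descent for a smooth loss under a hard L1 constraint."""
    b = b0.copy()
    for _ in range(iters):
        b = project_l1_ball(b - step * grad(b), radius)
    return b
```

For instance, minimizing the quadratic loss $\frac{1}{2}\lVert b - t\rVert^2$ with $t=(3,1)$ under $\lVert b\rVert_1 \le 2$ returns the exactly sparse point $(2, 0)$ on the boundary.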
> **Response for Weakness 3:**
>
>Thank you for your valuable feedback. We appreciate your suggestion to include a more thorough literature review and comparisons with existing work on policy learning and treatment effect estimation. In the revised version of our paper, we will incorporate a comprehensive review of relevant literature, including the work by Nekipelov, Semenova, and Syrgkanis (2022), which proposed a novel approach to finding sparse solutions based on the Lasso estimator in high-dimensional settings and demonstrated that the estimator converges at the oracle rate under mild conditions. We will provide detailed comparisons between our proposed methods and those discussed in the literature to highlight our contributions.
> **Response for Weakness 4:**
>
>Thank you for your insightful suggestion. The bias-variance tradeoff plot is given in the one-page author's response.
>
>Under Case 3, where the true optimal decision function is nonlinear and the covariates are 60-dimensional, we measure the reduction of bias using the difference between the true value functions of the optimal treatment decisions and the estimated treatment decisions, $V(d^\text{opt}) - V(\hat{d})$. The bias cannot be zero due to our model assuming a linear decision function.
>
>Except for the case with a sample size of 200, where additional optimization required for estimating energy balancing weights leads to an increase in bias, both models show a significant decrease in squared bias with only a minor increase in variance across both training and test sets. Notably, no tradeoff between bias and variance was observed for either model. The curves illustrate that using energy weights significantly reduces bias, which enhances the overall performance of the method. Thank you for your insight and valuable comment.
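The bias-variance decomposition described above can be sketched as follows (an illustrative computation, assuming `value_opt` holds the true value $V(d^\text{opt})$ and `value_hats` collects $V(\hat{d})$ across simulation repetitions; the names are ours, not the paper's):

```python
import numpy as np

def bias_variance_of_value(value_opt, value_hats):
    """Decompose value-function error over simulation repetitions.

    Squared bias is (V(d_opt) - mean_r V(d_hat_r))^2; variance is the
    empirical variance of V(d_hat_r) across repetitions r.
    """
    value_hats = np.asarray(value_hats, dtype=float)
    bias_sq = (value_opt - value_hats.mean()) ** 2
    variance = value_hats.var(ddof=1)
    return bias_sq, variance
```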
> **Response for Weakness 5:**
>
>Thank you for this very interesting suggestion. We have included your proposed baseline model in Figure 1, labeled as "Linear_Baseline" in the response PDF.
>
>This model performs excellently in the left panel, where the scenario mimics a randomized trial with a true optimal linear rule. This is expected because, in a randomized trial, covariates are well-balanced across treatment groups, allowing for an accurate comparison between treatment levels. Additionally, the baseline model can accurately capture the true linear decision function, which explains its strong performance.
>
>However, in the right panel, where the simulation mimics an observational study with a true optimal nonlinear rule, the baseline model shows poor performance. This is due to two reasons: firstly, the linear model requires balancing weights, such as IPW or energy weights, to reduce confounding effects present in observational studies. Secondly, it cannot fully capture the nonlinear decision function.
>
>We believe this interesting observation highlights the need for appropriate weighting in ITR-Learning, especially in observational studies and the importance of model flexibility. We plan to further investigate this suggestion as it provides valuable insights into the factors that significantly impact ITR-Learning. Thank you again for your insightful suggestion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. For Figure 3, I was expecting to see the bias-variance tradeoff comparison between your model and the baselines (AD and AD+Energy). However, I only see the baselines on that plot. Am I misunderstanding Figure 3 in the rebuttal pdf? Also, does the linear model you implemented for the comparison in Figure 1 include a variable screening step?
---
Reply to Comment 1.1.1:
Comment: >**Q4**: We apologize for misunderstanding your questions. We initially believed that a comparison between IPW and energy weights was requested to determine if energy weights reduce finite sample bias by avoiding IPW. Since we are unable to modify the figure at this stage, we have provided additional tables with our proposed method below:
>
### Training Set
| Method | Sample Size | Bias^2 | Variance | Total |
|-----------|-------------|--------|----------|-------|
| AD | 200 | 1.894 | 0.130 | 2.024 |
| AD+energy | 200 | 2.023 | 0.201 | 2.224 |
| Proposed | 200 | **0.141** | **0.122** | **0.263** |
| AD | 600 | 1.079 | 0.145 | 1.224 |
| AD+energy | 600 | 0.594 | 0.202 | 0.796 |
| Proposed | 600 | **0.028** | **0.048** | **0.076** |
| AD | 1000 | 0.639 | 0.069 | 0.708 |
| AD+energy | 1000 | 0.320 | 0.095 | 0.415 |
| Proposed | 1000 | **0.033** | **0.038** | **0.071** |
>
>
### Test Set
| Method | Sample Size | Bias^2 | Variance | Total |
|-----------|-------------|--------|----------|-------|
| AD | 200 | 1.914 | 0.078 | 1.992 |
| AD+energy | 200 | 2.069 | 0.127 | 2.196 |
| Proposed | 200 | **0.154** | **0.032** | **0.186** |
| AD | 600 | 1.100 | 0.083 | 1.183 |
| AD+energy | 600 | 0.580 | 0.135 | 0.715 |
| Proposed | 600 | **0.030** | **0.005** | **0.035** |
| AD | 1000 | 0.635 | 0.040 | 0.675 |
| AD+energy | 1000 | 0.329 | 0.070 | 0.399 |
| Proposed | 1000 | **0.034** | **0.009** | **0.043** |
>
> The tables show that our proposed method demonstrates the best performance in terms of squared bias and variance. It effectively reduces the bias between the true value functions of the optimal treatment decisions and the estimated treatment decisions without suffering from overfitting issues.
>
>Thank you for your insightful suggestions on measuring the effectiveness of our proposed method. We will include the revised figure with the proposed method in the revision.
>**Q5**:
> We didn't include a variable screening step for the linear baseline in Figure 1 (the top left $3\times 2$ plot in the author response), but here are the results with an additional variable screening step (denoted with "\_s"):
>
| Sample Size | Left: Linear_Baseline | Left: Linear_Baseline_s | Right: Linear_Baseline | Right: Linear_Baseline_s |
|-------------|------------------------------|---------------------------------|-------------------------------|----------------------------------|
| 200 | 0.893 | 0.915 | 0.297 | 0.319 |
| 600 | 0.948 | 0.961 | 0.319 | 0.338 |
| 1000 | 0.962 | 0.971 | 0.328 | 0.357 |
> The standard error is 0.001-0.005 for the left setting (first column in Figure 1) and 0.008-0.011 for the right setting (second column in Figure 1).
>
> With the inclusion of variable screening, the baseline model shows minor performance improvement; however, the overall performance pattern remains consistent. The baseline model (as well as the proposed method) performs best in the left subplot, representing a randomized trial with a true linear decision rule.
>
> In the right subplot of Figure 1, which represents an observational study, the addition of the variable screening step slightly enhances the performance of the linear baseline model. However, the overall performance of the baseline method remains limited and significantly worse than that of the proposed method. This is due to the inherent challenges of observational studies—such as covariate-dependent treatment and a true nonlinear decision rule—where balancing weights are essential for controlling confounding effects.
>
> We believe that the linear baseline that the reviewer suggested is an intuitive baseline method for ITR-Learning. We will include a discussion of this baseline in the revision.
>
> Thank you again for your insightful suggestions. Please let us know if there are any additional concerns we can address. | Summary: The paper presents a framework for estimating individualized treatment rules (ITRs) with multi-category treatments to address the problem of misspecified propensity score in inverse probability weighting (IPW) and the computation bias of L1 penalization. The authors propose using energy balancing weights (EBWs) for weighting and a hard L1-ball constraint to maintain objective smoothness, with projected gradient descent (PGD) for optimization. Theoretical results have been provided regarding the convergence rate of PGD and the estimated parameters in the linear ITR model. A simulation study and two real datasets are used to compare the proposed method with baseline methods.
Strengths: - The paper presents theoretical guarantees for the proposed method in terms of both the convergence rate of the computation algorithm and the statistical properties of the estimated ITR parameters.
- Extensive simulation and real data analysis have been conducted to test the proposed method.
Weaknesses: - Figure 1 only shows the average accuracy of different methods, although the standard deviations are reported in the appendix. It would be more informative to also plot the 95% confidence intervals to demonstrate whether the differences between methods are significant.
- Table 1 suggests that the empirical value function of the proposed method has a large standard deviation. The difference between the value function of the proposed method and other baseline methods is smaller than one standard deviation (except when the training size is 3000 or 5000 in the Email dataset), providing insufficient evidence to demonstrate the superiority of the proposed method.
- The tables in the appendix may overwhelm readers. Replacing them with figures that highlight the differences between methods could be more beneficial.
Technical Quality: 2
Clarity: 3
Questions for Authors: - In Figure 1, why is the “AD + Energy” line in the left column different from the “Proposed + Constrained” line in the middle column? Have the methods in the left column used additional techniques like outcome augmentation?
- It would be helpful for readers’ comprehension if the authors could add a high-level summary and a pointer to the appendix in Section 2.2, explaining how the EBW is constructed and used.
- The vector $\mathbf{u}$ in equation (2) has not been defined.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Response for Weakness 1:**
Thank you for the suggestion. We updated Figure 1 with an error bar with the standard error of the mean in the one-page author response.
> **Response for Weakness 2:**
Thank you for the insightful comment. Instead of reporting the standard deviation, we will report the standard error of the mean empirical value in the revised version, which is a convention in the machine learning literature (see the attached table in the one-page author response). Although some values still fall within the confidence interval, the overall mean empirical values of our method are higher than those of the baseline methods. The high standard deviation is mainly due to the variability of the dataset. In our analysis, only a small portion of the data out of 2500 was used for the training set, following the setting in [43], which mimics clinical trials (typically characterized by small sample sizes).
We want to emphasize the importance of computational efficiency in addition to empirical value. Our proposed methods are significantly faster than two popular methods: *policytree* and *causalDML*. Specifically, for analyzing one simulated dataset with 60 covariates and 1000 samples, our methods use only 9.4% of the computational time required by *causalDML* and 36.6% of the time needed by *policytree*.
We recognize that real data is complex, and it is unrealistic to expect a single method to outperform all others across every dataset. However, our extensive simulation studies clearly demonstrate the conditions under which our unified method performs well. We hope you also recognize the value of this comprehensive analysis.
> **Response for Weakness 3:**
We appreciate your valuable feedback. We will add figures to highlight the differences between methods.
> **Response for Question 1:**
Thank you for your question. In the left column of Figure 1, we used vanilla AD-Learning as a baseline to demonstrate that incorporating energy weights provides a performance gain in both randomized trial and observational study settings. To reduce possible confusion, we updated the middle column to compare the performance of each algorithm based on AD-Learning.
For your information, the "proposed" approach in the right column combines energy weights with additional techniques including variable screening, outcome augmentation, and inverse variance weighting. We explained it in lines 294-297, but we will further clarify it in the revised version. Thank you for your feedback.
> **Response for Question 2:**
Thank you for the comment. Energy balancing weights are derived from the concept of "weighted energy distance". These weights minimize the distance of the weighted empirical distributions of covariates across treatment groups, mitigating biases due to model misspecification of traditional inverse probability weighting (IPW) and resulting in more precise and reliable ITR estimation. We will include a high-level summary of section 2.2 to aid readers in understanding the key concepts.
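A minimal sketch of the weighted energy distance idea (illustrative only; the exact estimator and weight optimization in Section 2.2 may differ, and the function name is ours):

```python
import numpy as np

def weighted_energy_distance(X, Y, wx=None, wy=None):
    """Weighted energy distance between two empirical covariate distributions.

    Smaller values indicate that the weighted samples X (one treatment
    group) and Y (another group, or the pooled sample) are better balanced.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    wx = np.full(len(X), 1.0 / len(X)) if wx is None else wx / wx.sum()
    wy = np.full(len(Y), 1.0 / len(Y)) if wy is None else wy / wy.sum()

    def pairwise(A, B):
        # matrix of Euclidean distances ||a_i - b_j||
        return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

    cross = wx @ pairwise(X, Y) @ wy
    within_x = wx @ pairwise(X, X) @ wx
    within_y = wy @ pairwise(Y, Y) @ wy
    return 2.0 * cross - within_x - within_y
```

Balancing weights can then be chosen to drive this quantity toward zero for each treatment group.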
> **Response for Question 3:**
Thank you for pointing that out. The vector $\mathbf{u}$ is a random treatment vector in $\mathbb{R}^{K}$, corresponding to the random treatment $A$. We will add the definition in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for considering my concerns and comments. I appreciate the detailed explanations provided regarding the variability of real data and the computational speed, which I found particularly helpful. Overall, I agree that this is a technically solid paper, and I have increased my rating.
However, the proposed method appears to be combinatorial compared to the existing literature. In this case, to enhance reader understanding, it would be beneficial if the authors could clarify the contribution of each component of the proposed method. Although additional simulation results have been reported in the appendix, the key message should be made clear in the main paper. For example, in the third plot of the observational study in the updated Figure 1, the "Proposed + Constrained" method significantly improves the accuracy of AD (from 30% to 78% when n=1000). Yet, the improvement attributed solely to the "constrained" optimization is relatively modest (from 62% to 78% when n=1000). Furthermore, the first plot (comparing AD with AD + energy) suggests that the contribution of energy balancing weights is only about 2%. This raises questions about which additional techniques (variable screening, outcome augmentation, or inverse variance weighting) are driving the significant improvement observed. If high-dimensional covariates are the primary factor leading to the poor performance of benchmarks, the current simulation setup may not be an entirely fair comparison. In this context, it would be prudent to consider other methods that incorporate variable screening.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score and for your constructive feedback. We are pleased that you found our explanations regarding data variability and computational speed helpful.
We acknowledge your point that the main paper should clarify the key message: how each technique we incorporate into a unified framework contributes to the observed performance improvements. We will ensure that the contribution of each component of our proposed method is clearly explained in the main text for the revised version.
Regarding your concern about the combinatorial nature of our method, we would like to emphasize that while our approach leverages existing techniques, the algorithm for finding a sparse solution and its theoretical guarantees are novel contributions to the ITR literature. We have provided a theoretical analysis demonstrating why this combined approach yields efficient solutions, and we hope the reviewer may reconsider the value of our theoretical contribution which is new to our work.
Thank you again for your time and effort in evaluating our work. We appreciate your insights and will incorporate your suggestions to enhance the clarity and impact of our paper.
---
Reply to Comment 1.1.2:
Comment: Thank you for your insightful comments.
We conducted a simulation with 60 covariates and a sample size of 1000, featuring nonlinear treatment interaction effects and treatment-free effects with two true signals. Given that the remaining 58 covariates are irrelevant to the ITR decision, a robust approach to estimating decision functions is crucial.
Here are the accuracy results over 100 repetitions:
| Method | Accuracy (mean ± se) |
|-----------------------|-----------------------|
| AD | 28.6 (0.004) |
| AD_e | 31.2 (0.007) |
| AD_s | 32.7 (0.008) |
| Proposed | 63.5 (0.014) |
The results indicate that individual components alone do not significantly improve performance. Instead, the synergistic effect of combining these components is essential for ITR learning.
In high-dimensional settings, using energy weights alone balances the empirical distributions of all 60 variables, including the 58 irrelevant ones. Additionally, estimated decision functions may include irrelevant variables, reducing the impact of energy weights.
Similarly, using variable screening alone remains challenging due to model misspecification or highly nonlinear outcomes. Combining other methods, such as outcome augmentation, helps mitigate model misspecification effects.
Given these synergistic effects, we believe our unified framework for ITR learning adds significant value to the existing literature. We will elaborate on this in the revised paper.
Thank you again for your valuable feedback. We hope this explanation further clarifies our contribution and encourages a more favorable view of our paper in your review recommendation.
---
Rebuttal 2:
Comment: Thank you for your response. I recommend refining the writing to clarify the positioning of this paper. For example, if I understand correctly, the authors claim that variable selection in problems with multi-category treatments has not been previously studied. If this is the case, then comparing against benchmarks without variable selection is reasonable. However, if the main contribution lies in covariate balancing and the L1 constraint (rather than the L1 penalty), the benchmarks should include some form of variable screening. Otherwise, it seems straightforward that benchmark methods without variable selection would perform poorly in high-dimensional settings. I will discuss this further with the other reviewers and the AC. | Summary: The paper proposes an algorithm for treatment rule estimation under standard no unmeansured confounding + SUTVA assumption. The algorithm builds on the AD-learning approach but changes to energy balancing weights and different regularizations. The paper then concludes with both simulations and real data example.
Strengths: The paper combines AD-learning with other useful techniques to improve treatment rule estimation and have theoretical guarantees.
Weaknesses: I think the main weakness comes from two aspects: presentation and originality.
Presentation:
1. There is no separate related work section.
2. A lot of the main contexts (for example the algorithm 1, details on augmentation and inverse variance weighting) are put into the appendix. If the paper is too long to make it self-contained in 9 pages, I feel submitting to a journal is better suited.
3. The theoretical sections seem to be there just to have some results. The convergence analysis is not really specific to Algorithm 1, and the statistical guarantees rely on strong generative model assumptions.
4. Some detailed questions in the questions section.
Originality:
1. From my understanding (please correct me if I am wrong), all the crucial pieces of the algorithm are already in the literature, and the paper simply combines them. For example, neither balancing weights nor AD-learning is original.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Can you explain how you get the second "=" in equation (2)? Is it by assumption?
2. What is the definition of $u$ in equation (2)? It took a trip to the original AD-learning paper to figure this out, so you should define it here.
3. On line 101, 'which inherently satisfies sum-to-zero constraint': I understand the $u_k$'s sum to zero but I thought sum-to-zero means $\delta_k(x)$?
4. In figure 1, are there error bars?
5. In Figure 1, the leftmost subplot suggests that energy weights improve performance more in the RCT setting, which seems counterintuitive. Also, would it make sense to compare observational + linear with RCT + linear, just to see whether the weights help? Similarly, we could compare observational + linear vs. observational + nonlinear.
6. In table 1, do you know why the SE decreases then increases again?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Addressed in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Weakness 1,2:** Thank you for the feedback. Our work views the ITR framework as a weighted convex optimization problem, focusing on using robust weights and the PGD algorithm to find sparse solutions effectively, a novel approach in this context. We believe that these contributions are significant for the NeurIPS community.\
Due to the comprehensive analysis and detailed presentation required, some details including related work were placed in the appendix to meet the page limit. In the revision, we will include a related work section in the main text and aim to make the paper more self-contained while emphasizing our key contributions.
>**Weakness 3:** We appreciate your comments.\
>First, our computational guarantee of the proposed PGD algorithm does rely on a general convergence analysis for PGD (Lem.D.2). However, when applying such results to the weighted convex optimization problems that reformulate the ITR-Learning problems, our analysis reveals that the convergence rate of PGD depends crucially on the eigenvalues of the weighted design matrix $\boldsymbol{\Psi}$ in eq.8, a new concept introduced in this work. Our analysis shows that the Hessian of the ITR-Learning problem is given by $\boldsymbol{\Psi}$. Thus, choosing the weights to maximize the minimum eigenvalue of $\boldsymbol{\Psi}$ ensures the best possible convergence rate for the PGD algorithm. This provides a novel insight into choosing the optimal weights. Specifically, the minimum eigenvalue of the weighted design matrix is maximized when the weights are *distributional covariate balancing* where the weighted sample covariance matrices for each treatment are approximately the same. In cases where the covariates are discrete and one-hot encoded, the optimal weights exactly balance the covariate distributions given treatments. We will clarify these points in the revision.\
>Second, our generative models for ITR-Learning are not restrictive. It is common in statistical literature to assume a probabilistic model for the model parameter, ensuring that the model is identifiable and estimable under reasonable conditions. The resulting problems from our generative models correspond exactly to the weighted convex optimization formulation of ITR-Learning in eq.3-4.\
>The reviewer may find the linear decision function class restrictive. However, the linear function class, in principle, contains the polynomial function class. By taking the covariate powers up to M, the linear function class becomes the class of degree-M polynomial functions. Also, the linear function class can serve as a useful approximation to nonlinear decision functions. This linear approximation captures essential patterns in the data and makes interpretable treatment decisions, crucial for practitioners.\
>Additionally, our statistical guarantee introduces new techniques and results to the ITR literature. We provide non-asymptotic estimation guarantees using uniform concentration inequalities and Berry-Esseen theorem, which give explicit bounds on the estimation error for a given finite sample size. We also unify our computational and statistical guarantees to obtain joint computational and sample complexity results, demonstrating how the true model parameters can be recovered.\
>We will incorporate these points in the revision.
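For illustration, the spectral quantity driving both guarantees can be sketched as follows, assuming for concreteness that $\boldsymbol{\Psi}$ takes the weighted second-moment form $\frac{1}{n}\sum_i w_i x_i x_i^\top$ (the paper's Eq. 8 may include treatment encodings as well; this helper is ours):

```python
import numpy as np

def min_eigenvalue_weighted_design(X, w):
    """Minimum eigenvalue of a weighted design matrix.

    Hypothetical form Psi = (1/n) * sum_i w_i * x_i x_i^T. A larger
    minimum eigenvalue means stronger convexity of the weighted
    objective, hence a faster PGD convergence rate.
    """
    Psi = (X * w[:, None]).T @ X / len(X)
    return np.linalg.eigvalsh(Psi)[0]  # eigvalsh returns ascending order
```

Comparing this quantity across candidate weightings (e.g., IPW versus energy balancing weights) gives a concrete way to see which choice yields the better-conditioned optimization landscape.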
>**Originality:** We kindly ask you to refer to the global response.
>**Q1:** Yes, the second "=" in equation (2) can be derived using standard causal assumptions: positivity, conditional ignorability, and consistency.
>**Q2:** Thank you for pointing that out. The vector $u$ is a random treatment vector in $\mathbb{R}^K$, corresponding to the random treatment A. We will add the definition in the revision.
>**Q3:** The sum-to-zero constraints hold for $\delta_k(x)$ for all $x$ to ensure model identifiability. However, estimating K different decision functions that satisfy the sum-to-zero constraint is challenging. An alternative approach is to use AD-Learning framework, which assumes $\sum_k u_k=0$. It leads to $\sum_k u_k^T f(x)=0$ for any given decision function $f$ and covariate $x$.
>**Q4:** The updated Figure 1 can be found in the response pdf.
>**Q5:** Thank you for your questions and suggestions. First, the left subplot highlights the role of energy weights. Even in RCT, finite sample imbalances in covariates can occur. Energy weights help reduce these imbalances, leading to observed improvements. Second, for comparison "observational vs RCT+linear", the table below shows the mean accuracies (n=1000, 100 rep).
>
|p|AD (Obs)|AD+e(Obs)|AD(RCT)|AD+e(RCT)|
|-|-|-|-|-|
|20|0.844|0.905|0.865|0.895|
|40|0.830|0.893|0.854|0.900|
|60|0.822|0.894|0.857|0.895|
>
>The performance improvement with energy weights compared to IPW is greater in the observational study within the same rule. This comparison directly assesses the effect of weights in less challenging scenarios. We will include these results in the revision. Lastly, regarding the comparison "observational+linear vs nonlinear", we have already conducted simulation studies in Appendix Sec.H (Case 1 vs 3).
>
|p|AD(nonlinear)|AD+e(nonlinear)|AD(linear)|AD+e(linear)|
|-|-|-|-|-|
|20|0.471|0.572|0.782|0.845|
|40|0.440|0.542|0.774|0.831|
|60|0.419|0.531|0.764|0.825|
>
>The performance improvement with energy weights compared to IPW is greater under the nonlinear rule, indicating the robustness of energy weights.
>**Q6:** Thank you for your question. The high SE is due to the dataset's variability. We used a small portion of the 2500 data points for training, as done in [43], mimicking clinical trials with small sample sizes. Even with 1200, it's still less than 50% of the total dataset.\
>Increasing the training set generally decreases SE, but if the added data has more noise, SE may increase. Since this pattern is not specific to any method, as also observed in [43], it suggests the issue is related to the data nature rather than a particular method.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have adjusted the score accordingly.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer 1Znf for raising the score and reconsideration. Please let us know if you have any other concerns or questions that we can address. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for reviewing our work and providing valuable comments. We appreciate the time and effort you have taken to provide feedback on our paper.
In response to your suggestions, we have made the following updates to Figure 1 in the one-page author's response:
- **Error bar**: We have updated Figure 1 to include error bars, which provide a clearer representation of the variability in the results.
- **Improved performance of penalized approach**: We identified ways to improve the performance of the penalized approach to ensure a fair comparison with our proposed method. We have implemented these improvements and updated the corresponding results. We would like to extend our special thanks to reviewer kG5t for the valuable comments regarding the performance comparison. The performance gap between the constrained and penalized approaches has decreased but remains significant, highlighting the advantages of our proposed method.
Regarding the **originality** of our work, while it is true that our work builds on existing components such as balancing weights and AD-Learning, our contribution lies in providing a unified framework that combines these approaches in an effective way. We demonstrated theoretical justifications for the combined approach, which, to our knowledge, have not been previously established.
For example, in the appendix, we prove that outcome augmentation with our suggested balancing weights yields an estimator with minimum variance. Previous research has shown that under traditional inverse probability weighting, the model parameter after outcome augmentation achieves the minimum-variance solution, but there were no such results for other types of balancing weights. Our work extends these results to a more general setting, providing new insights and theoretical guarantees for the use of balancing weights.
Additionally, our unified framework allows for a more comprehensive understanding of the interplay between different components, leading to improved performance and robustness. We believe that these theoretical contributions provide significant value and advance the field beyond simply combining existing methods.
We also appreciate Reviewer **kG5t** for raising the following insightful question:
>* The author claims that using distributional covariate balancing weights rather than Inverse Probability Weighting (IPW) can reduce the finite sample bias in ITR-Learning. Can the author theoretically connect this reduction to the findings in Theorem 3.5?
Our theoretical analysis does indeed justify the use of distributional covariate balancing weights. Our optimization landscape analysis demonstrates that the strong convexity and smoothness of the convex optimization formulation for the ITR problem depend crucially on the following weighted design matrix $\Psi$ in Eq. (8). While we are free to choose the weights, our analysis provides a guiding principle for selecting these weights. Specifically, the strong convexity parameter of the ITR landscape is the minimum eigenvalue of the expected design matrix $E[\Psi]$, which we desire to be as large as possible. Namely, our analysis shows that to achieve an improved convergence rate for the PGD algorithm (Thm. 3.3) and a smaller statistical estimation error (Thm. 3.5), we need to choose the weights to maximize the minimum eigenvalue of $\Psi$. We acknowledge that this point was not clearly conveyed in our concise statement of Thm. 3.5 in the initial submission, where many important constants were suppressed in $O(\cdot)$ notations. In the revision, the high-probability bound (12) in Thm. 3.5 will read as
\begin{align}
\lVert B_\star - B_T \rVert_F \le \frac{C\sqrt{(p/n)\log \epsilon^{-1}} }{\mu} + \frac{8 \lambda_2 \lVert B_\star \rVert_F }{\mu},
\end{align}
where $\mu$ is essentially the minimum eigenvalue $\lambda_{\min}(E[\Psi])$ of the expected weighted design matrix (see Eq. (10)). The required sample size $n$ must be large enough to satisfy $\frac{(\sum_{i=1}^{n} w_{i}^{2})^{3}}{(\sum_{i=1}^{n} w_{i}^{3})^{2}} \ge C_{1} \epsilon^{-2}$ for some explicit constant $C_{1}>0$. The first term in this error bound represents the statistical error and the second term represents the bias introduced by the L2-regularization. Notice that both terms are proportional to $1/\mu$, which is essentially $1/\lambda_{\min}(E[\Psi])$. Thus, choosing the weighting function $w(A,X)$ to maximize $\lambda_{\min}(E[\Psi])$ helps minimize the overall statistical estimation error. Another interesting point is that when maximizing $\lambda_{\min}(E[\Psi])$ encourages highly heterogeneous balancing weights, the sample complexity increases through the bound $\frac{(\sum_{i=1}^{n} w_{i}^{2})^{3}}{(\sum_{i=1}^{n} w_{i}^{3})^{2}} \ge C_{1} \epsilon^{-2}$ (e.g., if $w_{i}\equiv 1$, this requires $n\gtrapprox \epsilon^{-2}$, but if $w_{1}=n$ and $w_{2}=\dots=w_{n}=0$, it may not be satisfied for any $n$).
Very interestingly, the minimum eigenvalue of the weighted design matrix is maximized when the weights are `distributional covariate balancing', making the empirical covariate distributions, conditional on each treatment, approximately the same. In the special case where the covariates are discrete and one-hot encoded, the optimal weights are those that exactly balance the covariate distributions given treatments (we can provide a concrete example). Therefore, maximizing the minimum eigenvalue of the weighted design matrix $\Psi$ provides an explicit spectral condition for obtaining effective distributional covariate balancing weights. We will elaborate on this point and provide further explanations in the revision.
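To make the spectral criterion concrete, here is a small illustrative sketch (the covariate cells, weights, and the diagonal one-hot form of $\Psi$ below are our simplifying assumptions, not the paper's exact Eq. (8)): with one-hot covariate-treatment cells, $\lambda_{\min}(\Psi)$ equals the smallest total cell weight divided by $n$, so exactly balancing the cells maximizes it, while the sample-size functional $(\sum w_i^2)^3/(\sum w_i^3)^2$ reduces to $n$ for uniform weights and shrinks for heterogeneous ones.

```python
import numpy as np

# Hypothetical toy setup: 2 covariate groups x 2 treatments -> 4 one-hot cells.
# Psi is simplified to (1/n) * sum_i w_i phi_i phi_i^T with one-hot phi_i,
# a sketch of (not identical to) the weighted design matrix in Eq. (8).
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
# Imbalanced assignment: treatment 1 is rare in group 1.
treat = np.where(group == 0, rng.integers(0, 2, n),
                 (rng.random(n) < 0.1).astype(int))
cell = 2 * group + treat  # cell index in {0, 1, 2, 3}

def min_eig_psi(w):
    """lambda_min of the diagonal one-hot design: smallest total cell weight / n."""
    psi = np.diag([w[cell == c].sum() / n for c in range(4)])
    return np.linalg.eigvalsh(psi).min()

def ess_ratio(w):
    """The sample-size functional (sum w_i^2)^3 / (sum w_i^3)^2 from the bound."""
    return (w**2).sum()**3 / (w**3).sum()**2

w_uniform = np.ones(n)
counts = np.bincount(cell, minlength=4)
w_balanced = (n / 4) / counts[cell]  # equal total weight n/4 in every cell

# Balancing maximizes lambda_min (= 1/4 here) over weights summing to n,
# at the price of a smaller effective sample size than uniform weighting.
```

Under these assumptions, `min_eig_psi(w_uniform)` is driven down by the rarest cell, whereas `min_eig_psi(w_balanced)` attains the maximal value 0.25, mirroring the trade-off between a large $\lambda_{\min}(E[\Psi])$ and the heterogeneity penalty in the sample-size condition.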
We hope these updates address your concerns and further clarify the contributions and findings of our work. Thank you once again for your insightful feedback.
Pdf: /pdf/2d339de41084013b9b6431d8b73c2de2e019a2de.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Where Do Large Learning Rates Lead Us? | Accept (poster) | Summary: This paper is an empirical study focusing on the effect of large learning rates (LRs) in neural network training. The authors aim to answer two main questions:
1. How large an initial LR is required for optimal quality?
2. What are the key differences between models trained with different LRs?
The study reveals that only a narrow range of initial LRs slightly above the convergence threshold lead to optimal results after fine-tuning with a small LR or weight averaging. The authors observe that using LRs from this optimal range allows for the optimization to locate a basin that only contains high-quality minima.
Strengths: 1. The authors conduct a detailed empirical analysis on both of the above-mentioned problems in a controlled setting.
2. The study discovered that only a narrow range of initial LRs slightly above the convergence threshold lead to optimal results after fine-tuning with a small LR or weight averaging. The authors observed that using LRs from this optimal range allows for the optimization to locate a basin that only contains high-quality minima. This is a novel insight into the geometry of the loss landscape in neural network training.
3. The authors also find that optimal initial LRs result in a sparse set of learned features, with a clear focus on those most relevant for the task. This finding contributes to our understanding of feature learning in neural networks.
Weaknesses: 1. The experiments in the study are limited to specific datasets (CIFAR or synthetic) and neural network architectures (ResNet). Therefore, the findings may not consistently generalize to other settings. While the study offers valuable practical implications and potential explanations for widely accepted practices, these findings require further validation in more complex practical scenarios with more intricate network architectures.
2. The conclusion that “only a narrow range of initial LRs slightly above the convergence threshold lead to optimal results” is hard to apply in practice, as determining the convergence threshold beforehand is challenging and problem-specific.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. What is the appropriate definition of the “convergence threshold”? How can we determine the convergence threshold in practice without the need to traverse all possible LRs?
2. How can we practically apply the takeaway 1?
3. In Figure 3, a majority of the dots remain flat in the rightmost region. I am interested in further clarification from the authors on how they arrived at their conclusion that “both angular distances and error barriers grow as PLR increases, continuing the trend established in subregime 2B” (From Ln 239 to 240).
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have conducted enough discussion regarding the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thoughtful review of our work! We now address the raised concerns and questions.
> The experiments in the study are limited to specific datasets (CIFAR or synthetic) and neural network architectures (ResNet).
We have conducted additional experiments showing that our findings transfer to other practical scenarios. Please see the general comment for more detail.
> The conclusion that “only a narrow range of initial LRs slightly above the convergence threshold lead to optimal results” is hard to apply in practice, as determining the convergence threshold beforehand is challenging and problem-specific.
We fully agree with this remark; however, in our general comment we also point out that precisely determining this range (subregime 2A) is often not necessary to achieve a good final solution, as long as a more advanced LR schedule is used than a simple LR drop.
**Answering Q1**
The convergence threshold is understood as a value in the LR range that divides the LRs allowing the loss to converge (if trained with these fixed LR values) and the remaining LRs. In theory, finding this threshold could be done relatively efficiently using, e.g., binary search, however, as we answered above, it is not needed in practice.
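The binary-search idea above can be sketched as follows (a hedged illustration: the `converges` predicate is a hypothetical stand-in for actually training with a fixed LR and checking whether the training loss converges, and all numeric values are arbitrary):

```python
import math

def find_convergence_threshold(converges, lo=1e-5, hi=1e1, tol=0.05):
    """Binary search in log-LR space for the boundary between regimes 1 and 2.

    `converges(lr)` is assumed monotone: True for LRs below the threshold
    (training with that fixed LR converges), False above it.
    """
    assert converges(lo) and not converges(hi)
    llo, lhi = math.log10(lo), math.log10(hi)
    while lhi - llo > tol:
        mid = 0.5 * (llo + lhi)
        if converges(10 ** mid):
            llo = mid  # still converges: threshold is higher
        else:
            lhi = mid  # no convergence: threshold is lower
    return 10 ** (0.5 * (llo + lhi))

# Toy stand-in: pretend training converges iff lr <= 3e-2.
true_threshold = 3e-2
approx = find_convergence_threshold(lambda lr: lr <= true_threshold)
```

Each probe costs one (possibly shortened) training run, so the search needs only about $\log_2$ of the LR-range width in runs; as noted above, though, this precision is rarely needed in practice.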
Please refer to our general comment for a more detailed discussion on this topic.
**Answering Q2**
Takeaway 1 essentially provides a recipe to obtain optimal solutions after fine-tuning with a constant small LR or weight averaging: start training with a moderately high LR from subregime 2A, a relatively narrow range above the convergence threshold. However, as we show in Appendix D, training even with significantly larger initial LRs from subregime 2B can yield similar results if one chooses a slightly more complex LR schedule. Hence, we argue that for practical schedules with gradual LR decay it is often sufficient to take some reasonably large initial LR, which does not allow for convergence, to achieve good final quality.
Please see the general comment for further discussion.
**Answering Q3**
This is an entirely correct observation; we thank the reviewer for spotting this inaccuracy, which we will fix in the next text revision!
What we should have written was: "In the third regime, both the angular distances and the error barriers approach their *upper limits* (angular distance of $\pi/2$, which is a typical angular distance between two independent points in high-dimensional space, and random-guess error), *completing* the trend established in subregime 2B".
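The $\pi/2$ reference value can be checked directly: for two independent random directions in high dimension, the cosine similarity concentrates near zero, so the angle concentrates near $\pi/2$ (a quick numerical check; the dimension below is chosen arbitrarily as a stand-in for the number of network parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100_000  # stand-in for the parameter count of a network
u, v = rng.standard_normal(d), rng.standard_normal(d)

cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.arccos(cos)  # concentrates around pi/2 (~1.5708) as d grows
```

The cosine here fluctuates on the order of $1/\sqrt{d}$, which is why independent high-dimensional points are nearly orthogonal.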
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments. I have decided to raise the score.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We are very grateful to the reviewer!
We are happy that we managed to address the reviewer's comments and we believe that our discussion will benefit the future revision a lot. | Summary: This paper investigates the benefits of an initial large learning rate (LR) in training neural networks. In particular, the paper identifies three training regimes in a pretraining-finetuning/model averaging context, where different regimes depend on different pre-training LRs and have distinct impacts on the fine-tuned model's final generalization capability. The paper also provides conceptual explanations for the influence of different LRs from loss landscape and feature learning perspectives and empirically justifies the explanations through controlled and real-world experiments.
Strengths: In general, I enjoyed reading this paper. I think this is a solid paper in empirically analyzing the effect/benefits of an initial large learning rate in training NNs.
- The pretraining-finetuning setup considered by the paper captures the practical consideration of LR scheduling/annealing and goes beyond the fixed LR setting in prior work.
- The identified nuanced training subregimes 2A and 2B and their different impacts on fine-tuning are interesting.
- The explanation from the loss landscape perspective through linear mode connectivity is intuitive and well-justified in the experiments.
Weaknesses: - Some of the paper's results have already been shown by prior work, e.g., large LRs can lead to sparse features [1].
- While the paper classifies the training into different regimes depending on the LR, how to choose LR to reach the best regime (2A) seems to remain an issue in practice: while the authors use the oracle **test** accuracy to identify different regimes, in practice one often only has access to the training loss curve (e.g., when training LLMs/large vision models, it is often not clear what constitutes an "oracle" test set to examine the model's generalization ability). It would be great if the authors could discuss whether different training regimes are also identifiable from the training loss alone.
- While the ablations are sufficient, I feel that the overall evaluation of the paper is still a bit limited, given that the paper is mainly empirical. For example, only small-scale datasets such as CIFAR-10/100 and rather small network architectures such as ResNet-18 are considered.
---
[1] SGD with large step sizes learns sparse features. ICML, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the recommendation of the paper in choosing the initial LR in practice?
- Why does subregime 2A reach better minima than regime 1?
- Is it possible to show similar effects on moderate-sized NNs and datasets, e.g., ResNet-50/small ViTs on Tiny ImageNet or some other datasets larger than CIFAR-10/100?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their valuable feedback and overall positive assessment of our work! We respond to the comments and questions as follows.
> Some of the paper's results have already been shown by prior work, e.g., large LRs can lead to sparse features [1].
The results of [1] are closely related to ours, and we discuss them in Section 1.1. However, there are still important differences between this study and ours.
The referenced work examines feature sparsity in terms of the fraction of distinct, non-zero activations at some hidden layer of a neural network over the dataset. In contrast, we study the importance of features in the *input data* for prediction. This approach allowed us to identify feature sparsity as the model preference for the most task-relevant features in the data when training with optimal initial LR values, which is a novel result. In short, our works consider different definitions of “features”: whether they are internal representations of data within a model or patterns in the input data itself.
Furthermore, we came to a different conclusion because we found that feature sparsity (as we understand it) behaves non-monotonically w.r.t. LR, while [1] suggests a monotonic trend.
> how to choose LR to reach the best regime (2A) seems to remain an issue in practice
This indeed could be the case if one wanted to locate the optimal initial LRs for weight averaging or fine-tuning with a decreased LR at the end of training. However, more advanced LR schedules in practice allow for substantially higher initial LRs with similar final performance.
Please see our general comment for further discussion.
> It would be great if the authors could discuss whether different training regimes are also identifiable from the training loss alone.
This is a very good point and we regret that we did not reflect it clearly in the text.
The distinctive features of different regimes were first established in [2] based on various metrics including training loss and gradient norm (please refer to Fig. 1 in [2]). For instance, regimes 1 and 2 can be easily distinguished by the behavior of the training loss/error: whether it reaches low values (convergence) or hovers at some non-zero level. Therefore, no oracle access to the test accuracy is necessary to identify regimes.
We will improve our wording to avoid further misinterpretations.
> the overall evaluation of the paper is still a bit limited
We have conducted additional experiments supporting our findings, please see the general comment for more detail.
**Answering Q1**
As we answered above, based on our findings, in practice one simply needs to choose some large enough LR value from regime 2, i.e., not allowing for convergence, and use a gradually decaying LR schedule to obtain near optimal results.
**Answering Q2**
That is a very intriguing and non-trivial question that requires further investigation. We attempted to provide some initial intuition based on the loss landscape and feature learning analysis. As we briefly discuss in Section 7, we conjecture that feature sparsity and mode proximity/linear connectivity observed in subregime 2A are closely related. We hypothesize that pre-training in this regime helps the model filter out unnecessary input features and emphasize the most relevant ones, which is beneficial for further fine-tuning/SWA. This process is dually represented in the training dynamics as finding a stable linearly connected basin of good solutions in the loss landscape. By contrast, training in regime 1 converges to the nearest minimum admissible by the chosen LR value, not allowing enough time for feature consolidation. Again, this interpretation is currently largely speculative, but deserves further study.
**Answering Q3**
As our additional experiments (in the general comment) suggest, all key findings of our work remain valid in other practical settings, including training on Tiny ImageNet and using the ViT model. Also, for example, our synthetic example, which substantially differs from the CIFAR image classification with convolutional networks, possesses all the properties of training in different regimes (Appendix G). Overall, we expect that our results can be transferred to any overparameterized training setting that allows convergence, so that regime 1 is reachable.
[1] Andriushchenko Maksym et al. SGD with large step sizes learns sparse features. In International Conference on Machine Learning, 2023.
[2] Kodryan Maxim et al. Training scale-invariant neural networks on the sphere can happen in three regimes. Advances in Neural Information Processing Systems, 2022.
---
Rebuttal Comment 1.1:
Title: Reviewer's response
Comment: Thank you for your response. I appreciate the clarification and the additional experiments. I have also checked the general response and the reviews of other reviewers. At this point, I still maintain a positive rating of the paper.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We are grateful to the reviewer! We appreciate the feedback provided, which will certainly benefit our work. | Summary: This paper studies the effects of using initial (large) learning rates on the performance of the trained neural networks. Two key questions explored are:
1. how large are the optimal initial learning rates?
2. what's the difference between the model trained by different initial learning rates?
The paper identifies the optimal initial learning rates as a small range of learning rates slightly above the convergence threshold. Furthermore, it shows that an initial learning rate in this range locates high-quality minima and learns sparse but useful features, which other learning rates fail to do, resulting in worse generalization.
Strengths: **Originality and Significance:**
To me, understanding the effects of large-learning-rate neural network training is essential to understanding the success of today's deep learning techniques and conventions. This paper empirically answers how large the optimal (constant effective) learning rate should be in terms of its utility for future finetuning with small learning rates (or weight average training). The corresponding findings, to my best knowledge, are new and are enlightening for practical purposes. It is also reasonable to use loss landscape geometry and feature learning capability to showcase why the corresponding learning rate can (and cannot) generalize well after finetuning (or weight average training).
**Quality and Clarity:**
The paper is well written and the conclusions and findings are quite clearly presented in terms of sections and takeaways. The experiments shown in the paper are well conducted to justify the findings.
Weaknesses: 1. The main results are conducted in a fully scale-invariant setting with constraints on the weight norm, using projected SGD. This setup is theoretically sound but unlikely to appear in common practice. Still, the paper shows that the results derived in the controlled setting transfer to the practical setup to some extent.
2. Even though conceptually the optimal range of initial learning rates is identified as the small interval slightly above the convergence threshold, it is still hard to determine numerically how large the learning rate should be set in practice, where one is unable to explicitly find the convergence threshold.
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. The conclusion in Line 219 to 220 is a bit strange to me. The previous part in this paragraph argues that finetuning with larger learning rate in this pretraining regime leads to better performance and higher-quality minima. So what does it mean by saying "unstable to fine-tuning with higher learning rates and suboptimal generalization"?
2. In Figure 3, it seems that both the training and testing error barriers in the right part of regime 1 (let's call it regime 1B) and regime 2A are nearly zero. How does this phenomenon help to support the separation between regime 1B and regime 2A, especially in terms of the loss landscape geometry? Or did I misunderstand these two figures?
Confidence: 3
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: The architecture and datasets tested are restrictive. The empirical findings of the paper would be more convincing if further experiments are conducted on other popular archs and datasets including VGG and imagenet, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the Reviewer for their positive and constructive feedback on our work! We address the concerns and questions below one by one.
**Scale-invariant setting**
As the Reviewer rightly noted, our main experiments are conducted in a specific scale-invariant setting to provide effective control over the learning rate value. Since our study is more fundamental than practical in nature, we decided to follow this setup in line with prior work examining the effect of learning rate on training dynamics. This allowed us to isolate the impact of scale invariance on the effective learning rate of the model and then extend our main findings to more practical scenarios. In our general comment, we present additional experimental results that further confirm that our claims apply to conventional training settings as well.
**Finding the optimal LR range**
It could indeed be computationally burdensome to find the exact convergence threshold for a given training setup. However, although the optimal LRs for fine-tuning with a constant small LR or weight averaging usually lie in a relatively narrow range just above this threshold, in practice even taking substantially larger initial LRs can give similar results if one chooses a slightly more complex LR schedule (Appendix D). Hence, we may conclude that for regular LR schedules it is not so important to precisely determine the convergence threshold but to choose some reasonably large initial LR above it.
Please see our general comment for further discussion.
**Answering Q1**
Thanks for the comment, this is indeed a poor formulation, which we will correct in the next text revision!
What we wanted to say is the following. LRs from regime 1 allow training to converge to some minima; however, these minima (a) have non-optimal generalization (compared to the fine-tuned/SWA solutions of subregime 2A) and (b) are “unstable” in the sense that increasing the LR (within the same regime 1) after convergence can knock the model out of the current minimum into a new minimum, which is perhaps better but still belongs to the minima of the first regime and is therefore not optimal.
**Answering Q2**
Indeed, local geometry, from the point of view of training/test error barriers, cannot be used to separate regimes 1 and 2A, as in fact we don’t have barriers in either case. In the first case, because all the points (pre-trained, fine-tuned, and SWA) simply coincide, which is suggested by the angular distance on the plots. In the second case, because the localized basin contains close high-quality solutions, which are linearly connected. Error barriers are, however, useful for separating subregimes 2A and 2B, as in the latter case fine-tuned/SWA solutions lose linear connectivity. At the same time, regimes 1 and 2 can be easily distinguished by the behavior of the training loss/error: whether it reaches low values (convergence) or hovers at some non-zero level (see, e.g., Figure 1 in [1]).
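The barrier criterion used here can be sketched numerically; the toy two-dimensional landscapes below are purely illustrative (not the paper's loss surfaces), but they show how a near-zero barrier signals linear mode connectivity while separate wells yield a positive one:

```python
import numpy as np

def error_barrier(loss, w1, w2, n_points=21):
    """Height of the loss barrier on the segment between solutions w1 and w2:
    max over alpha of loss((1-a)*w1 + a*w2) minus the linearly interpolated
    endpoint loss. A near-zero barrier indicates linear mode connectivity.
    """
    alphas = np.linspace(0.0, 1.0, n_points)
    path = [loss((1 - a) * w1 + a * w2) for a in alphas]
    ends = [(1 - a) * path[0] + a * path[-1] for a in alphas]
    return max(p - e for p, e in zip(path, ends))

# Two solutions in the same convex basin: no barrier (linearly connected).
quad = lambda w: float(w @ w)
w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b_same = error_barrier(quad, w1, w2)

# Two separate wells: the interpolation climbs a ridge (positive barrier).
c1, c2 = np.array([-2.0, 0.0]), np.array([2.0, 0.0])
wells = lambda w: float(min((w - c1) @ (w - c1), (w - c2) @ (w - c2)))
b_diff = error_barrier(wells, c1, c2)
```

In the paper's setting, `loss` would be the training or test error of the network at interpolated weights, which is why zero barriers alone cannot separate regime 1 from subregime 2A but do separate 2A from 2B.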
We will clarify this more in the text.
**Limitations**
> The architecture and datasets tested are restrictive.
We have conducted additional experiments supporting our findings, please see the general comment for more detail.
[1] Kodryan Maxim et al. Training scale-invariant neural networks on the sphere can happen in three regimes. Advances in Neural Information Processing Systems, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your reply. I appreciate your answers to my questions, and I agree with your comments. Also, the additional experiments further demonstrate the applicability of the paper's findings in practical setups. I have no further questions and will maintain my score of 7.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We are very grateful to the reviewer for their useful comments and high score given to our work! | null | null | Rebuttal 1:
Rebuttal: We kindly thank all the reviewers for their constructive and valuable feedback that will help us further improve our paper!
We are very pleased that the reviewers assessed our findings as novel, practically important, and providing additional insights into the loss landscape geometry and feature learning in neural networks.
Below we would like to address two common issues raised in all reviews.
## Limited experimental evaluation
To show that our findings can be extrapolated to more general scenarios, we conducted additional experiments with practical ResNet-18 on Tiny ImageNet and ViT on CIFAR datasets. We attach a PDF with the corresponding plots to this comment. All results will be incorporated in the next version of the paper.
In general, our main conclusions remain the same. We can clearly observe regimes 1 and 2 (regime 3 is unstable in practical settings) as well as divide the second regime into subregimes 2A and 2B. The best fine-tuned solutions are achieved in subregime 2A, which locates a linearly connected basin and depicts a clear feature selection trend w.r.t. different frequency bands in input images. At the same time, training in regime 1 often fails to converge due to strong augmentations and exhibits catapults when fine-tuning with $FLR \gg PLR$, while fine-tuning from subregime 2B leads to diverse suboptimal solutions.
**ViT on CIFAR:** The general trends are the same as described above, although the advantage of pre-training in subregime 2A is slightly less obvious in this setup.
Notably, feature learning in transformers is different from convolutional networks, since they inherently capture lower frequencies in the data [1]. We can still see that the role of the most important features (low-frequencies in this case) grows towards 2A. Interestingly, however, the importance of midrange frequencies also peaks in subregime 2A for both datasets, which is consistent with our intuition that mid-frequencies are essential for natural image classification.
We also consider different partitions into low and midrange frequencies to answer the question of whether ViTs really rely more on the lower frequency part of the spectrum, or whether its midrange simply “starts earlier” compared to convolutional models. Figure 2 in the attached PDF shows that the latter appears to be the case. That is, if you select 5-24 or 3-24 bands for mid frequencies, then the midrange will dominate over the low-frequency bands in terms of the corresponding test accuracies.
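The band-splitting probe discussed above could be sketched as follows (a hedged illustration: the radial FFT band filter and the specific band edges are our assumptions, not necessarily the paper's exact implementation). One would evaluate test accuracy on images filtered to each band to gauge which frequencies the model relies on:

```python
import numpy as np

def keep_bands(img, lo, hi):
    """Keep only 2D-FFT coefficients with radial frequency in [lo, hi);
    zero the rest. A sketch of a frequency-band probe for a single-channel
    image; band edges such as 5-24 are illustrative.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)  # radial distance from DC
    f[(r < lo) | (r >= hi)] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
low = keep_bands(img, 0, 5)      # low-frequency content
mid = keep_bands(img, 5, 24)     # midrange band (one partition discussed above)
high = keep_bands(img, 24, 1e9)  # remaining high frequencies
# The bands partition the spectrum, so the three pieces sum back to the image.
```

Shifting the lower band edge (e.g., 5-24 vs. 3-24) then corresponds exactly to the alternative midrange partitions compared in Figure 2 of the attached PDF.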
**ResNet-18 on Tiny ImageNet:** All trends are very much alike practical experiments reported in the paper. A small drop in the pre-train accuracy for $PLR = 10^{-3}$ is due to the periodic behavior [2]: the final pre-training epoch occurs at the beginning of a period. Similar effects are reported in the paper when training ResNet-18 on CIFAR-100. That could be fixed by choosing a different random seed and/or pre-training epoch budget.
[1] Namuk Park and Songkuk Kim. How do vision transformers work? In International Conference on Learning Representations, 2021.
[2] Lobacheva Ekaterina et al. On the periodic behavior of neural network training with batch normalization and weight decay. Advances in Neural Information Processing Systems, 2021.
## What the “convergence threshold” is and how it can be used in practice
In general, by the convergence threshold (CT) for a given model and training setup we mean a learning rate (LR) value that separates regimes 1 and 2. In other words, training with a constant LR below CT leads to convergence to a minimum, while taking a larger constant LR prevents the optimization from converging. Convergence here is defined in a conventional sense, i.e., the optimized functional (training loss) closely approaches its global minimum by the end of training; in a simplified training setup without advanced data augmentations, it may be tracked by the ability of the model to fit the training data (i.e., reach ~100% training accuracy). We realize that this definition is still not constructive, and, as we show in Appendix C, CT is better understood as a small zone within the overall LR range, since the exact threshold may slightly shift depending, e.g., on the epoch budget.
However, the purpose of our work is not to quantitatively obtain the exact ranges of optimal LR values, but rather to qualitatively explore the difference between training with various LRs. The fundamental question was whether small LRs are suboptimal to start training with, even if convergence is ensured. We indeed found that larger initial LRs, while not allowing for convergence by themselves, may lead to notably better final solutions, which is reinforced by the loss landscape and feature learning intuition. Thus, the choice of the LR influences not only the properties of the minimum achievable with this LR (as in regime 1), but the entire training process, even long before convergence, including feature learning and the optimization trajectory in the loss landscape.
Speaking of practical implications, despite our advice to take the initial LRs “slightly above the convergence threshold” for optimal fine-tuning with a constant small LR or weight averaging at the end of training, higher LRs from subregime 2B may lead to similar final quality if a more advanced LR schedule is used (Appendix D). As we discuss in part in Section 7, gradually decreasing LR can correct for an initial value that is too high. Accordingly, most practical LR schedules are designed in that manner: starting with a high value and gradually decreasing it as training progresses. Therefore, given a proper LR schedule, to achieve a good final solution in practice one essentially only needs to ensure that the initial LR value is reasonably large, i.e., it does not allow for convergence but also does not lead to numerical issues during training.
We will make more effort to elucidate these details in the next revision of the text.
Pdf: /pdf/76ac943a20764c06a5e92d051d9471bd2210a789.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | Accept (poster) | Summary: The paper investigates the relationship between the transferability of adversarial examples and the flatness of adversarial examples. The paper shows that flatness alone is not sufficient to guarantee transferability. Based on this theoretical result, it derives an optimization method for adversarial examples that improves transferability. This proposed method is evaluated empirically and compared to a wide range of baselines.
Strengths: - sound theoretical motivation and result
- the assumptions are reasonable and clearly stated
- comprehensive empirical evaluation
- evaluation on real-world applications
- interesting insights on the link between flatness and transferability of adversarial examples
Weaknesses: - the paper does not discuss how its insights can be used to improve defense mechanisms
- The presentation of the proof of Thm. 3.1 could be improved.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Eq. 17: why is $p(x+\delta)\leq p(x)$?
- The usage of D in the statement of Thm. 3.1 makes a mapping of terms in the proof to the result unnecessarily cumbersome.
- How does the proposed method compare to PGN [1] which improves transferability through flatness?
- The computational problem of measuring flatness, i.e., the second order gradient components, can be alleviated by considering relative flatness [2], which has also been applied to adversarial examples [3]. Could this be used as an alternative or means to improve the proposed method?
[1] Ge, Zhijin, et al. "Boosting adversarial transferability by achieving flat local maxima." Advances in Neural Information Processing Systems 36 (2023): 70141-70161.
[2] Petzka, Henning, et al. "Relative flatness and generalization." Advances in neural information processing systems 34 (2021): 18420-18432.
[3] Walter, Nils Philipp, et al. "The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective." arXiv preprint arXiv:2405.16918 (2024).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper clearly states assumptions and limitations throughout the manuscript. It would, however, be beneficial to justify the theoretical assumptions and discuss the resulting limitations (e.g., the assumption on smoothness of the target distribution, or that probabilities go to zero for $x\rightarrow\pm\infty$).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** The presentation of the proof of Thm. 3.1 could be improved & The usage of D in the statement of Thm. 3.1 makes a mapping of terms in the proof to the result unnecessarily cumbersome.
**Response:**
In response to your suggestion, we have revised the proof and replaced $D$ to enhance clarity and readability.
---
**Q2:** why is $p(x+\delta) \leq p(x)$?
**Response:**
Natural samples are drawn from the real-world distribution that the model was designed to handle.
In contrast, adversarial examples are not a direct product of this distribution; they are artificially crafted through specific techniques that exploit the model’s vulnerabilities.
Therefore, their occurrence in real-world scenarios, where data is not typically manipulated in such an adversarial manner, is far less common, i.e., $p(x+\delta) \leq p(x)$.
---
**Q3:** How does the proposed method compare to PGN [1] which improves transferability through flatness?
**Response:**
We employ ResNet50 as the proxy model to craft adversarial examples for 1000 natural samples, using our attack and PGN.
The results, summarized in the table below, demonstrate the superior performance of our attack over PGN.
For instance, our attack achieves an impressive ASR of 99.7% on EfficientNet, significantly surpassing the 81.6% achieved by PGN.
| Target Model | PGN [1] | Ours |
|:------------:|:----:|:----:|
| EfficientNet | 81.6 | 99.7 |
| VGG19 | 86.3 | 98.5 |
| ConvNet | 65.8 | 94.6 |
| ViT | 51.1 | 93.8 |
---
**Q4:** The computational problem of measuring flatness, i.e., the second order gradient components, can be alleviated by considering relative flatness [2], which has also been applied to adversarial examples [3]. Could this be used as an alternative or means to improve the proposed method?
**Response:**
We have carefully read [2,3] and found them to be quite enlightening.
Indeed, relative flatness shares a significant similarity to the second-order gradient component in our bound.
Nonetheless, calculating relative flatness presents a considerable challenge, as shown in [3], which only addresses the relative flatness concerning the penultimate layer.
Similarly, directly penalizing the relative flatness of adversarial examples poses substantial computational difficulty.
We value the implications of these inspiring works [2,3] and will include them in the revised manuscript.
We also consider exploring relative flatness as a promising avenue for future research.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Dear authors,
Thank you for your response. I appreciate the additional results on PGN.
Regarding the assumption $p(x+\delta) \leq p(x)$ I suggest making that an explicit assumption. While this assumption is intuitively reasonable, we can construct target distributions for which it is likely broken: E.g., a nearly uniform distribution with low-amplitude high frequency waves in the pdf would make it likely that an example is sampled close to a valley of the pdf and close examples likely have a higher probability.
Thank you for also answering my question regarding relative flatness. | Summary: This paper focuses on the transferability of adversarial examples. The authors first derive an upper bound for the transferability loss used in the paper. Then, they propose a new loss function based on the derived bound to increase the adversarial transferability. The proposed TPA method is tested in both classic models and real applications.
Strengths: 1. This paper is well-organized.
2. The proposed method is tested in real-world applications.
Weaknesses: - The theoretical claims lack a strong and important assumption. Theorem 3.1 is based on a strong assumption that the source model and the target model have the same loss on all inputs ($L(F'(x),y)-L(F(x),y)\approx 0$), which is used in Line 482 in Appendix A. However, this strong and important assumption is not explicitly stated in Theorem 3.1. Moreover, this assumption is not verified and may not be practical for different models.
- Theorem 3.1 cannot well reflect the bound of the adversarial transferability of inputs. First, there seems to be a typo w.r.t. the definition of $D(x+\delta,y)$ in Line 125. It is supposed to be $D(x+\delta,y)=\Vert L(F'(x+\delta),y) - L(F(x+\delta),y) \Vert$. Second, according to the proof in Appendix A, the term on the left side of the inequality should be $\Vert D(x+\delta,y)\Vert_2^2$. Please check the claim and the proof. Third, Theorem 3.1 only provides a bound for the "transfer-related loss term", but the adversarial transferability of adversarial examples also depends on the local effectiveness term. Thus, it is unclear whether the loss function based on this bound conflicts with the local adversarial loss.
- The attack success rates in experiments are not reported with the error bar or the standard deviation.
- It is unknown in Section 6 how the used 100 adversarial examples are crafted, e.g., which proxy model is used, which dataset is used, and how 100 examples are selected.
- The evaluation in Section 6 is conducted by only one volunteer, which is unreliable.
Technical Quality: 2
Clarity: 2
Questions for Authors: See the weakness part.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors did not discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1\&2:** The theoretical claims lack a strong and important assumption. \& Theorem 3.1 cannot well reflect the bound of the adversarial transferability of inputs.
**Response:**
For the first issue, we do not assume $L(F'(x),y)-L(F(x),y) \approx 0$ (see the derivation below).
For the second issue, it is a typo; it should be $D(x+\delta,y)= L(F'(x+\delta),y) - L(F(x+\delta),y)$.
As you mentioned, the left side of our bound (Theorem 1) should be in squared terms.
Finally, based on revised Theorem 3.1, we can derive the bound of adversarial transferability.
Let us briefly restate the revised proof to address your questions.
Based on $L(F'(x),y)-L(F(x),y) = L(F'(x+\delta),y) - \nabla L(F'(x+\delta),y)^\top \delta - L(F(x+\delta),y) + \nabla L(F(x+\delta),y)^\top \delta$, we have $D(x+\delta,y) = D(x,y) + \nabla L(F'(x+\delta),y)^\top \delta - \nabla L(F(x+\delta),y)^\top \delta$.
Taking the $L_2$-norm on both sides and then taking the expectation, we obtain $\int p(x) ||D(x+\delta,y)||_2^2dx\leq \int p(x) ||D(x,y)||_2^2dx+\int p(x)|| \nabla L(F'(x+\delta),y)^\top \delta-\nabla L(F(x+\delta),y)^\top \delta||_2^2dx$.
The main result in Appendix A is Equation 21, which does not use $L(F'(x),y)-L(F(x),y) \approx 0$. Therefore, we have
$$\int p(x) || \nabla L(F'(x+\delta),y)^\top \delta-\nabla L(F(x+\delta),y)^\top \delta||_2^2 dx \leq (1+C) \int p(x) ||\delta||_2^2 || \nabla \log F(x+\delta) ||_2^2 dx + 2 \sum \int p(x) ||\delta||_2^2 |\nabla^2 \log F(x+\delta)[i,i]| dx + C \int p(x) ||\delta||_2^2 ||\nabla (\log F'(x) - \log F(x))||_2^2 dx.$$
Combining the above equations, we get our revised bound (revised Theorem 1):
$$\mathbb{E} ||D(x+\delta,y)||_2^2 \leq \mathbb{E}\{ ||D(x,y)||_2^2 + C ||\delta||_2^2 ||\nabla D(x,y)||_2^2 \} + (1+C) \mathbb{E} \{ ||\delta||_2^2 || \nabla \log F(x+\delta) ||_2^2 \}+2\,\mathbb{E} \{ \sum ||\delta||_2^2 |\nabla^2 \log F(x+\delta)[i,i]| \}.$$
Now, let us consider the bound for adversarial transferability.
Specifically, based on $L(F'(x+\delta),y) = D(x+\delta,y)+L(F(x+\delta),y)$ and taking the $L_2$-norm on both sides and applying basic norm properties, we get: $$ ||L(F'(x+\delta),y)||_2^2 \geq | || L(F(x+\delta),y) ||_2^2 - || D(x+\delta,y) ||_2^2 |. $$
Since $L \geq 0$ and the loss of $x+\delta$ is higher on the proxy model than on the target model, we have $\mathbb{E} ||L(F'(x+\delta),y)||_2^2 \geq \mathbb{E} || L(F(x+\delta),y) ||_2^2 - \mathbb{E} || D(x+\delta,y) ||_2^2.$
The bound of $\mathbb{E} || D(x+\delta,y) ||_2^2$ is already provided. Thus we can obtain the bound of $\mathbb{E} ||L(F'(x+\delta),y)||_2^2$ (See Response for Reviewer #x5ig's Q1).
The loss of adversarial examples on the proxy model is our "local effectiveness term".
This inspires the design of our attack, as shown in Equation 4 in the original manuscript, where the first term maximizes the local effectiveness term and the second term minimizes the bound on the transfer-related loss term.
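As an aside, the first-order Taylor step behind this derivation can be checked numerically on a toy one-dimensional example. The two scalar "loss" functions below are hypothetical stand-ins for $L(F(\cdot),y)$ (proxy) and $L(F'(\cdot),y)$ (target), not the models in the paper; this is a minimal sketch only.

```python
import numpy as np

# Toy 1-D sanity check of the first-order Taylor step above. The two scalar
# "loss" functions are hypothetical stand-ins, not the models from the paper.
def loss_proxy(z):
    return 1.3 * np.log(1.0 + np.exp(-z)) + 0.10 * z ** 2

def loss_target(z):
    return np.log(1.0 + np.exp(-z)) + 0.05 * z ** 2

def grad(f, z, h=1e-6):
    # central finite difference
    return (f(z + h) - f(z - h)) / (2 * h)

def D(z):
    # D(z) = L(F'(z),y) - L(F(z),y)
    return loss_target(z) - loss_proxy(z)

x, delta = 0.7, 1e-3
# D(x+delta) = D(x) + [grad L'(x+delta) - grad L(x+delta)] * delta + O(delta^2)
lhs = D(x + delta)
rhs = D(x) + (grad(loss_target, x + delta) - grad(loss_proxy, x + delta)) * delta
assert abs(lhs - rhs) < 1e-6  # discrepancy is the O(delta^2) remainder
```

For small $\delta$ the discrepancy shrinks as $O(\|\delta\|^2)$, matching the remainder of the first-order expansion.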
---
**Question 3:** The attack success rates in experiments are not reported with the error bar or the standard deviation.
**Response:**
We have conducted experiments to calculate the error bars and standard deviations and included them in the revised manuscript.
Below are some results, where we use ResNet50 as the proxy model and run each attack in five trials to report (ASR $\pm$ standard deviation).
Our method not only achieves higher ASRs but also exhibits smaller deviations.
| Attack | DenseNet121 | EfficientNet | InceptionV3 | ConvNet | ViT |
|:------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| RAP | 95.05$\pm$0.41 | 95.15$\pm$0.38 | 93.76$\pm$0.56 | 90.62$\pm$0.37 | 62.54$\pm$0.37 |
| BSR | 96.98$\pm$0.36 | 95.07$\pm$0.41 | 93.39$\pm$0.33 | 88.27$\pm$0.21 | 82.14$\pm$0.36 |
| Ours | 99.68$\pm$0.08 | 99.55$\pm$0.04 | 98.74$\pm$0.11 | 94.37$\pm$0.07 | 93.54$\pm$0.08 |
---
**Question 4:** It is unknown in Section 6 how the used 100 adversarial examples are crafted, e.g., which proxy model is used, which dataset is used, and how 100 examples are selected.
**Response:**
We randomly select 100 samples from the benchmark evaluation dataset ImageNet [1].
Using these samples, we generate 100 adversarial examples with our attack method, employing the default hyperparameters and ResNet50 as the proxy model.
We have added these details in the revised manuscript.
---
**Question 5:** The evaluation in Section 6 is conducted by only one volunteer, which is unreliable.
**Response:**
We have recruited two additional volunteers to conduct the evaluation.
The table below reports the average scores and variance from all three volunteers (the original evaluator + two additional volunteers).
The results show a high degree of consistency across the three volunteers, reinforcing the superior performance of our method.
| Score | Classification | Object Detection | Google Search | Bing Search | Yandex Search | Baidu Search | GPT-4 | Claude3 |
|:-----:|:--------------:|:----------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| 5 | 2$\pm$1 | 2.33$\pm$0.58 | 0$\pm$0 | 0$\pm$0 | 0$\pm$0 | 0$\pm$0 | 1.67$\pm$0.58 | 0.33$\pm$0.58 |
| 4 | 7$\pm$2 | 21.67$\pm$0.58 | 10.67$\pm$1.15 | 6.33$\pm$0.58 | 6.33$\pm$1.53 | 4.33$\pm$0.58 | 13.33$\pm$1.53 | 11.33$\pm$0.58 |
| 3 | 13.33$\pm$0.58 | 7.67$\pm$0.58 | 17$\pm$1 | 11$\pm$1 | 12.33$\pm$1.15 | 5$\pm$1 | 28.33$\pm$1.15 | 26$\pm$1 |
| 2 | 9$\pm$1 | 5.33$\pm$2.31 | 17$\pm$1 | 20.67$\pm$0.58 | 16.67$\pm$1.53 | 10.33$\pm$0.58 | 29.67$\pm$0.58 | 31$\pm$2.65 |
| 1 | 68.67$\pm$1.15 | 63$\pm$2.65 | 55.33$\pm$0.58 | 62$\pm$1 | 64.67$\pm$1.53 | 80.33$\pm$1.53 | 27$\pm$1 | 31.33$\pm$1.15 |
---
[1] On success and simplicity: A second look at transferable targeted attacks
---
Rebuttal Comment 1.1:
Title: Discussion Inquiry
Comment: Dear Reviewer,
We thank you for the precious review time and valuable comments. We have provided responses to your question and the weakness you mentioned. We hope this can address your concerns.
We hope to further discuss with you whether or not your concerns have been addressed appropriately. Please let us know if you have additional questions or comments. We look forward to hearing from you soon.
Best regards,
Authors
---
Reply to Comment 1.1.1:
Title: Looking forward to your feedback
Comment: Dear Reviewer QVii,
Sorry to bother you again. With the discussion phase nearing the end, we would like to know whether the responses have addressed your concerns.
Should this be the case, we would be grateful if you would consider raising the final rating to reflect this.
If there are any remaining concerns, please let us know. We are more than willing to engage in further discussion and address any remaining concerns to the best of our abilities.
We are looking forward to your reply. Thank you for your efforts in this paper.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for the response. Most of my concerns about experiments are addressed. However, after reading the responses to all reviewers, I still have concerns about the theoretical claims in the paper. For example, some assumptions and intuitions like $p(x+\delta)\le p(x)$, $F'(x+\delta)>F(x+\delta)$, $1\ge F'(x+\delta)$ are used in the proof but not clearly listed. These assumptions are all important assumptions for the proof, but most of them are simply claimed to be "natural" or "should be" without a formal discussion and verification in the paper. On the other hand, I'm also confused about Eq.(12) mentioned by Reviewer x5ig.
---
Rebuttal 2:
Title: Response
Comment: Thank you for your feedback. We have explicitly made the assumptions ($p(x+\delta) \leq p(x)$ and $F'(x+\delta) \geq F(x+\delta)$) in the revised manuscript to avoid any potential confusion.
To clarify, $F'(x+\delta)$ represents the probability that the target model correctly classifies $x+\delta$, i.e., $0 \leq F'(x+\delta) \leq 1$.
Regarding $F'(x+\delta) \geq F(x+\delta)$ and the confusion surrounding Equation 12, we have provided a detailed explanation in our response to Reviewer x5ig.
Please refer to that for further details.
Concerning $p(x+\delta) \leq p(x)$, there is substantial supporting evidence from the literature [1,2,3] and from practical considerations.
Adversarial examples are artificially crafted through specific algorithms and optimization processes, tailored to particular models and tasks.
As such, they are not naturally occurring, and their probability of appearing in the real world is significantly lower than that of natural samples.
In practice, the distributions we encounter are typically those produced by the natural processes that generate the data, and these do not favor adversarial examples over natural ones.
Formally, let us consider the following derivation:
$$
p(x+\delta) = \sum p(x+\delta, y_i) = \sum p(y_i) p(x+\delta|y_i) = \sum p(y_i) p(x|y_i) + \sum p(y_i) \nabla p(x|y_i)^T \delta = p(x) + \sum p(y_i) \nabla p(x|y_i)^T \delta.
$$
Since the ground-truth label $y_g$ for a specific natural sample $x$ is fixed, we can simplify the above equation to:
$$
p(x+\delta) = p(x) + p(y_g) \nabla p(x|y_g)^T \delta.
$$
Intuitively, $\delta$ should satisfy $\nabla p(x|y_g)^\top \delta \leq 0$, which means that $\delta$ does not increase the probability of the ground-truth label $y_g$ after being applied to $x$.
This is because $\delta$ is generated by a proxy model, which is trained on the data distribution $p(x,y)$.
Thus, $\nabla p(x|y_g)^\top \delta > 0$ is counterintuitive and unlikely, unless the proxy model has learned a distribution that is negatively associated with $p(x,y)$.
In practice, DNNs, especially those used in real-world applications, perform well on $x$.
For poorly performing DNNs, they may already have low accuracy on $x$, and thus generating adversarial examples for them is trivial.
Therefore, the assumption that $p(x+\delta) \leq p(x)$ is reasonable.
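The marginalisation and first-order expansion above can also be sanity-checked numerically. The sketch below uses a hypothetical two-class Gaussian mixture; the priors, means, and perturbation are illustrative only, not values from the paper.

```python
import numpy as np

# Toy 1-D check of p(x+delta) = p(x) + sum_i p(y_i) * grad p(x|y_i) * delta
# (to first order), with a hypothetical two-class Gaussian mixture.
priors = np.array([0.4, 0.6])
means, sigma = np.array([-1.0, 2.0]), 1.0

def cond(z, i):
    # class-conditional density p(z | y_i), a Gaussian
    return np.exp(-0.5 * ((z - means[i]) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def p(z):
    # marginal p(z) = sum_i p(y_i) p(z | y_i)
    return sum(priors[i] * cond(z, i) for i in range(2))

def grad_cond(z, i, h=1e-6):
    # central finite difference of the class-conditional density
    return (cond(z + h, i) - cond(z - h, i)) / (2 * h)

x, delta = 0.3, 1e-3
first_order = p(x) + sum(priors[i] * grad_cond(x, i) * delta for i in range(2))
assert abs(p(x + delta) - first_order) < 1e-6  # error is O(delta^2)
```

Whether $p(x+\delta) \leq p(x)$ then holds depends on the sign of the gradient term, which is exactly the assumption being discussed.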
We hope these clarifications help to strengthen the rationale behind our assumptions and provide a clearer understanding of the context.
We also highlight that this manuscript serves as the first theoretical study on adversarial example transferability, and we believe it can offer the community deeper insights into understanding adversarial example transferability.
We look forward to your reply and hope that this addresses your concerns.
---
[1] On the (Statistical) Detection of Adversarial Examples
[2] Out-of-Distribution Data: An Acquaintance of Adversarial Examples -- A Survey
[3] Interpreting Adversarial Examples in Deep Learning: A Review
---
Rebuttal Comment 2.1:
Title: Discussion Inquiry
Comment: Dear Reviewer QVii,
Thank you for your ongoing efforts in helping us improve the quality of this manuscript. We greatly appreciate the time and attention you have dedicated.
We have responded to your latest comments. Specifically, you mentioned concerns regarding the assumptions and Equation (12), which were initially pointed out by Reviewer x5ig and Reviewer dRSh. We are pleased to report that Reviewer x5ig and Reviewer dRSh have expressed satisfaction with our response, indicating that these concerns have been adequately addressed for them.
As the discussion period draws to a close, we would like to reach out to see if you have any remaining questions or unresolved issues. If everything is now clear, we would be grateful if you could consider updating your evaluation to reflect this.
Once again, thank you for your constructive feedback and for your invaluable contribution to the development of this manuscript. We look forward to hearing from you soon.
---
Reply to Comment 2.1.1:
Title: Anticipating your response
Comment: Dear Reviewer QVii,
Sorry to bother you again. We appreciate the time and attention you have dedicated to this manuscript. With only one day left in the discussion period, we are eager to hear your feedback on whether our recent response has addressed your concerns.
Notably, in your initial feedback, you indicated that the original concerns had been addressed. The remaining concerns stem from other reviewers' opinions. It is encouraging that Reviewer x5ig and Reviewer dRSh indicated that these concerns have also been satisfactorily resolved.
If you have any remaining concerns, please do not hesitate to let us know; we are more than happy to clarify and respond. Engaging in this discussion with you has been a rewarding experience, and your feedback has significantly improved the quality of this manuscript.
We look forward to your feedback.
Best regards,
Authors | Summary: The paper proposes a theoretical investigation into the relationship between the flatness of adversarial examples and their transferability. The authors challenge the prevailing belief that flatter adversarial examples necessarily have better transferability. They introduce a new method called Theoretically Provable Attack (TPA), which optimizes a surrogate of the derived transferability bound, enabling the generation of more transferable adversarial examples. The paper includes extensive experiments demonstrating the effectiveness of TPA on various benchmarks and real-world applications.
Strengths: This paper addresses a highly worthy research topic. The theoretical understanding of adversarial transferability is still under-explored. The experimental results on benchmarks are also very impressive. The authors claim that merely constraining the gradient norm at the adversarial examples is not sufficient to enhance model transferability; it is also necessary to consider second-order gradient information.
Weaknesses: The theoretical analysis in the paper establishes an upper bound that appears to be quite loose (due to Taylor approximations and inequality relaxations). As a result, it lacks sufficient insights into how this upper bound inspired the design of the TPA method presented in Equation (4) of the paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In transfer attacks, the target model ($\hat{F}$ here) is black-box and unknown, while the statement in line 160 of the paper that "a proxy model needs only yield predictions for x that are closely aligned with those of the target model" is not feasible in actual attack scenarios.
2. Furthermore, in Equation (3), the derived upper bound is only related to the target model $\hat{F}$ in the first term, which is confusing. It is noted that the derivation of the second-order gradient component comes from Equation (12). My question is why the gradient of the target model $ \hat{F}(x + \delta)$ disappears from the fourth to the fifth line in Equation (12). Is this due to the application of the integration by parts formula?
3. Please provide a performance comparison of the TPA method with existing methods that only constrain the gradient norm [11, 41]. Is the core difference that TPA uses uniformly distributed noise?
If the authors can satisfactorily address the above questions, I can consider raising my score.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: see the questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1:** The tightness of our bound.
**Response:**
We would like to provide some clarifications regarding our bound.
First, as pointed out by Reviewer QVii, the first and second terms in Eq.3 should be squared.
The revised bound in Theorem 1 is: $$\mathbb{E} ||D(x+\delta,y)||_2^2 \leq \mathbb{E} \{ ||D(x,y)||_2^2 + C ||\delta||_2^2 ||\nabla D(x,y)||_2^2 \} + (1+C) \mathbb{E} \{ ||\delta||_2^2 || \nabla \log F(x+\delta) ||_2^2 \} + 2\,\mathbb{E} \{ \sum ||\delta||_2^2 |\nabla^2 \log F(x+\delta)[i,i]| \}.$$
We denote the terms on the right-hand side of the inequality as $Q$.
Moreover, in our response to Reviewer QVii, we detail the lower bound for the squared loss of the generated adversarial examples on the target model: $$\mathbb{E} ||L(F'(x+\delta),y)||_2^2 \geq \mathbb{E} || L(F(x+\delta),y) ||_2^2 - \mathbb{E} || D(x+\delta,y) ||_2^2 \geq \mathbb{E} || L(F(x+\delta),y) ||_2^2 - Q.$$
We here conduct an empirical evaluation to examine the effectiveness of our bound.
We craft 1000 adversarial examples using ResNet50 against DenseNet121.
Our estimates show that the sum of squared losses for the examples on the proxy and target models is approximately 280.90 and 76.37, with $\mathbb{E} ||D(x+\delta,y)||_2^2$ of 69.27.
We calculate the value of $Q$ to be 170.75, setting $C$ to 1 due to $C \leq 1$.
The difference between 170.75 and 69.27 is somewhat non-trivial.
However, when we translate this difference into probabilities, it becomes quite minor.
Specifically, the squared loss for the target model on the generated adversarial examples should be at least about $10.50^2$ ($\mathbb{E} || L(F(x+\delta),y) ||_2^2 - Q = 280.90-170.75 = 110.15 \approx 10.50^2$).
This implies that the probability of the target model correctly classifying the examples is at most $e^{-10.50} \approx 2.75 \times 10^{-5}$, which is a rather small number.
In summary, our bound is indeed tight and practical, due to the exponential mechanism and the typically high loss of the generated adversarial examples in the target model.
We also evaluate several other models, as detailed in the table below.
| Target Model | $\mathbb{E} \Vert L(F(x+\delta),y) \Vert_2^2-Q$ | Probability Bound |
|:------------:|:---------:|:--:|
| EfficientNet | 182.03 | 4.81 $\times 10^{-5}$ |
| InceptionV3 | 189.53 | 7.05 $\times 10^{-5}$ |
| MobileNetV3 | 161.91 | 1.83 $\times 10^{-5}$ |
| ViT | 215.56 | 0.0003 |
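For readers who wish to re-derive the headline DenseNet121 numbers, the arithmetic can be reproduced in a few lines. This is a sanity check only, using the values quoted above and assuming the cross-entropy relation between loss and correct-class probability, $F' = e^{-L}$.

```python
import math

# Re-derive the quoted DenseNet121 numbers, assuming cross-entropy so that the
# correct-class probability is F' = exp(-loss). Values are those stated above.
proxy_sq_loss, Q = 280.90, 170.75        # E||L(F(x+delta),y)||^2 and the bound Q
residual_sq = proxy_sq_loss - Q          # 110.15, approximately 10.50^2
target_loss = math.sqrt(residual_sq)     # ~10.50: lower bound on the target loss
prob_correct = math.exp(-target_loss)    # ~2.75e-5: upper bound on correct classification
assert abs(target_loss - 10.50) < 0.01
assert 2.7e-5 < prob_correct < 2.8e-5
```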
---
**Question 2:** The target model is black-box and unknown.
**Response:**
What we commonly refer to as a black-box scenario actually permits some queries to the target model but restricts access to its architecture and parameters.
This is common across various AI applications, e.g., Google's AI services, which allow users to input data and receive predictions.
Moreover, a truly inaccessible target model would negate the possibility of feeding adversarial examples into it, making black-box attacks trivial.
Therefore, limited access is practical and reflects actual attack scenarios.
---
**Question 3:** Why does the gradient of the target model $F'(x+\delta)$ disappear from the fourth to the fifth line in Equation (12)?
**Response:**
The probability of the target model correctly predicting $x+\delta$ should be higher than that of the proxy model, that is, $F'(x+\delta) \geq F(x+\delta)$.
Consequently, we have $-F'(x+\delta) \leq -F(x+\delta).$
Additionally, considering the second derivative of $\log(x)$, which is $-\frac{1}{x^2} \leq 0$, we have:
$$- \sum \int p(x)||\delta||_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx \geq - \sum \int p(x)||\delta||_2^2 F(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx.$$
We have added a clear explanation of this step to improve the manuscript's readability.
---
**Question 4:** Comparison of the TPA method with [11, 41].
**Response:**
We employ ResNet50 as the proxy model to craft adversarial examples for 1000 natural samples, using our attack and the attacks proposed in [11,41].
The results, summarized in the table below, demonstrate that our attack significantly outperforms those described in [11,41].
Notice that the attacks presented in [11] and [41] are almost identical in terms of their objective functions and optimization methods, resulting in nearly identical attack success rates.
| Target Model | [11] | [41] | Ours |
|:------------:|:----:|:----:|:----:|
| EfficientNet | 81.6 | 81.2 | 99.7 |
| VGG19 | 86.3 | 86.5 | 98.5 |
| ConvNet | 65.8 | 65.5 | 94.6 |
| ViT | 51.1 | 50.7 | 93.8 |
Formally, the key difference between our attack and those in [11, 41] lies in the use of uniformly distributed noise.
Despite its simplicity, this additional random noise plays a unique and fundamentally important role in enhancing performance, as our theoretical analysis illustrates.
This is also the primary reason why our method achieves notably higher success rates compared to [11, 41].
---
[11] Boosting adversarial transferability by achieving flat local maxima
[41] Gnp attack: Transferable adversarial examples via gradient norm penalty
---
Rebuttal Comment 1.1:
Title: Discussion Inquiry
Comment: Dear Reviewer,
We thank you for the precious review time and valuable comments. We have provided responses to your question and the weakness you mentioned. We hope this can address your concerns.
We hope to further discuss with you whether or not your concerns have been addressed appropriately. Please let us know if you have additional questions or comments. We look forward to hearing from you soon.
Best regards,
Authors
---
Reply to Comment 1.1.1:
Title: Looking forward to your feedback
Comment: Dear Reviewer x5ig,
Sorry to bother you again. With the discussion phase nearing the end, we would like to know whether the responses have addressed your concerns.
Should this be the case, we would be grateful if you would consider raising the final rating to reflect this.
If there are any remaining concerns, please let us know. We are more than willing to engage in further discussion and address any remaining concerns to the best of our abilities.
We are looking forward to your reply. Thank you for your efforts in this paper.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Title: Official Comment by Reviewer x5ig
Comment: Thank you for the detailed rebuttal. However, I still have concerns and a few follow-up questions.
1. You claimed that ``when we translate this difference into probabilities, it becomes quite minor``. However, an **average** squared loss of 10.50 on $x + \delta$ does not imply that the probability of the target model correctly classifying a **specific** example is at most $2.75 \times 10^{-5}$.
2. My second concern is still not addressed. First of all, $F'(x+\delta) \geq F(x+\delta)$ is not true for all samples over $p(x)$. Besides, my question is why the **gradient** of the target model, $\nabla F^{\prime}(x + \delta)$, disappears from the fourth to the fifth line in Equation (12). The only explanation in my understanding is that you apply the integration by parts by letting $u = F(x + \delta)$ and $v =\nabla \log F(x + \delta)$, then $\int v du = uv - \int u dv$. However, $(F(x+\delta))^{\prime} = F'(x+\delta) [i]$ is totally wrong as $F'$ is the target model rather than the gradient of the proxy model $F$.
3. The comparison results with [11, 41] are promising, demonstrating the effectiveness of optimizing the gradient norm around $x + \delta + \Delta$ rather than $x +\delta$. Does the effectiveness and contribution of TPA come from [11, 41] + SAM[1]?
> [1] Sharpness-aware minimization for efficiently improving generalization.
Based on the above concerns, I still keep my original score at the current phase.
---
Rebuttal 2:
Title: Response (1/2)
Comment: We appreciate your feedback. We would like to provide further clarification as we sense some misunderstandings here. We also look forward to your reply and hope that this addresses your concerns.
---
**Question:** Our bound.
**Response:**
Notice that the expected square loss of the generated adversarial examples on the target model (DenseNet121) is $10.50^2$.
In statistical terms, while an individual sample's squared loss may vary around this expected value, we expect the majority of samples to exhibit comparable losses.
To be more specific, we empirically evaluate the variance of the squared losses to be 54.62 (over 1000 adversarial examples generated by our attack).
According to Chebyshev's inequality, at least 96\% of the samples should lie within 5 standard deviations of the expected value.
Within this range, the squared losses of our generated adversarial examples should be at least $10.50^2 - 5 \times \sqrt{54.62} = 73.30 \approx 8.56^2$.
This implies that our generated adversarial samples incur a loss greater than 8.56 on DenseNet121 with at least 95\% probability.
A loss of 8.56 indicates that the probability of correct classification by DenseNet121 for these samples is approximately 0.0002.
In other words, out of every hundred samples, at least 95 can effectively mislead the target model.
Doesn't this high probability of successfully attacking the target model sufficiently demonstrate the effectiveness and practicality of our bound?
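The Chebyshev argument above can be reproduced in a few lines; this only re-does the arithmetic with the quoted mean and variance, again assuming the cross-entropy relation $F' = e^{-L}$.

```python
import math

# Reproduce the Chebyshev argument with the quoted numbers: mean squared loss
# 10.50^2, variance 54.62, and a 5-standard-deviation window.
mean_sq, var, k = 10.50 ** 2, 54.62, 5
coverage = 1.0 - 1.0 / k ** 2                 # Chebyshev: at least 96% within k std devs
low_sq = mean_sq - k * math.sqrt(var)         # ~73.30
low_loss = math.sqrt(low_sq)                  # ~8.56
prob_correct = math.exp(-low_loss)            # ~0.0002, assuming cross-entropy loss
assert abs(coverage - 0.96) < 1e-12
assert abs(low_sq - 73.30) < 0.05
assert abs(low_loss - 8.56) < 0.01
assert 1.8e-4 < prob_correct < 2.0e-4
```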
---
**Question:** The effectiveness of TPA.
**Response:**
As stated in our introduction, inspiring works such as [1, 11, 41] prompt us to investigate the theoretical relationship between flatness and the transferability of adversarial examples.
Our theory suggests that penalizing the first and second-order gradients of generated adversarial examples can effectively enhance their transferability.
Notably, prior works [1, 11, 41] did not include penalties on second-order gradient.
Our proposed method is simple yet quite effective: it generates more transferable adversarial examples by penalizing both first- and second-order gradients via additional noise.
We acknowledge the contributions made by these existing works in our introduction and clarify the distinctions between our method and theirs.
---
Rebuttal 3:
Title: Response(2/2)
Comment: **Question:** The assumption about $F'(x+\delta) \geq F(x+\delta)$.
**Response:**
Regarding the second concern, there are some typos in the appendix of the original paper. Let us consider the following derivation:
$$ \sum \int p(x) \Vert \delta \Vert _2^2 \nabla F'(x+\delta)[i] \cdot \nabla \log F(x+\delta)[i] dx $$
$$ = \sum p(x) \Vert \delta \Vert_2^2 F'(x+\delta) \cdot \nabla \log F(x+\delta)[i] |_a^b - \sum \int p(x) \Vert \delta \Vert_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx $$
$$ = - \sum \int p(x)\Vert \delta \Vert_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx \geq - \sum \int p(x) \Vert \delta \Vert_2^2 F(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx,$$
where $a=-\infty, b=+\infty$.
The second expression follows from integration by parts, the third uses $p(\pm\infty)=0$ to drop the boundary term, and the final inequality leverages $F'(x+\delta) \geq F(x+\delta)$ together with the fact that the second derivative of $\log$ is negative.
This expression indeed derives $\int p(x) \Vert \delta \Vert_2^2 \nabla \log F'(x+\delta)^\top \nabla \log F(x+\delta) dx \geq - \sum_{i} \int p(x) \Vert \delta \Vert_2^2 F(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx.$
We rely on $F'(x+\delta) \geq F(x+\delta)$, a generally valid assumption.
Since $\delta$ is crafted to maximize the loss on the proxy model, it is optimally tailored to the proxy rather than the target, so the proxy's correct-class probability is suppressed the most, i.e., $F'(x+\delta) \geq F(x+\delta)$.
Moreover, we conduct experiments using BIM and our method to generate adversarial examples for 1000 natural samples on ResNet50.
For DenseNet121, EfficientNet, VGG19, ConvNet, and Vision Transformer (ViT), all five target models assign a higher prediction probability to the ground-truth class of these generated adversarial examples than the proxy model does.
Specifically, in the presence of adversarial examples generated by our attack, DenseNet121, EfficientNet, VGG19, ConvNet, and ViT have correct prediction probabilities that are 61 times, 197 times, 406 times, 1058 times, and 3674 times higher, respectively, than those of the proxy model ResNet50 (note that since the proxy model's correct prediction probability for adversarial examples is typically very tiny, such as around $10^{-7}$, the target models are still easily misled by these adversarial examples).
Thus, assuming $F'(x+\delta) \geq F(x+\delta)$ in transfer attack scenarios is justified.
Furthermore, even if peculiar samples violating the assumption exist, i.e., samples with $F'(x+\delta) < F(x+\delta)$, there is no need to investigate them further, because $F(x+\delta) > F'(x+\delta)$ already implies that $x+\delta$ tricks the target model (adversarial examples generated on the proxy model almost always mislead the proxy itself).
In other words, the regime $F'(x+\delta) \geq F(x+\delta)$ is precisely where analyzing transfer-based attacks is meaningful.
For theoretical rigor, we can adjust our analysis as follows:
$$ - \sum \int p(x)\Vert \delta \Vert_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx = - \sum \int_{U_1} p(x)\Vert \delta \Vert_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx - \sum \int_{U_2} p(x)\Vert \delta \Vert_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx$$
$$
\geq - \sum \int_{U_1} p(x)\Vert \delta \Vert_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx \geq - \sum \int_{U_1} p(x)\Vert \delta \Vert_2^2 F(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx.
$$
Here, we split the entire integration domain into two non-overlapping parts: $U_1$ where $F'(x+\delta) \geq F(x+\delta)$ and $U_2$ where $F'(x+\delta) < F(x+\delta)$.
Due to $- \sum \int_{U_2} p(x)\Vert \delta \Vert_2^2 F'(x+\delta) \nabla^2 \log F(x+\delta)[i,i] dx \geq 0$, we can disregard this term.
This adjustment does not compromise the integrity of our theory; it merely restricts the integration interval to $U_1$.
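The adjusted chain of inequalities can also be sanity-checked numerically. The toy 1-D instance below (Gaussian $p$, sigmoids standing in for $F$ and $F'$, fixed $\Vert\delta\Vert_2^2$; all choices purely illustrative, not the paper's models) confirms that discarding the non-negative $U_2$ term only relaxes the bound in the safe direction:

```python
import math

sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
p = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # Gaussian density
F = lambda x: sigmoid(x)          # stand-in for F(x + delta)
Fp = lambda x: sigmoid(2 * x)     # stand-in for F'(x + delta); F' < F for x < 0
d2logF = lambda x: -sigmoid(x) * (1 - sigmoid(x))  # (log F)'' <= 0
delta2 = 1.0                      # fixed ||delta||_2^2

dx = 0.01
xs = [-10 + dx * i for i in range(2001)]

def integral(domain, weight):
    # Riemann sum of -p(x) * ||delta||^2 * weight(x) * (log F)''(x) over {domain}
    return sum(-p(x) * delta2 * weight(x) * d2logF(x) * dx for x in xs if domain(x))

total = integral(lambda x: True, Fp)            # whole domain, weighted by F'
u1_Fp = integral(lambda x: Fp(x) >= F(x), Fp)   # restricted to U_1, weighted by F'
u1_F = integral(lambda x: Fp(x) >= F(x), F)     # restricted to U_1, weighted by F
assert total >= u1_Fp >= u1_F >= 0              # the claimed chain of inequalities
```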
We hope this can address your concerns.
---
Rebuttal 4:
Title: After the rebuttals
Comment: I thank the authors for their careful rebuttals. My concerns about unclear details and experiments have been well addressed, so I will raise my final score. The assumptions must be explicitly presented and explained in the final version.

---

Global Rebuttal 1:
Rebuttal: We would like to express our sincere appreciation for the efforts and feedback from all reviewers. We have taken into account reviewers' comments and suggestions, which have greatly enriched the quality of this manuscript.
As noted by some reviewers, there are minor errors and ambiguities in our proof. We have fixed these issues to enhance the manuscript's readability. **Importantly, these revisions do not affect the core insights and contributions of this manuscript.**
This analysis is grounded in practical assumptions and unveils a nuanced relationship between the transferability of adversarial examples and their flatness. We believe that this insight will provide the community with a deeper understanding of transferability, paving the way for future research.
Moreover, we have incorporated reviewers' constructive suggestions regarding experiments and other relevant interesting literature.
Overall, the reviewers' expertise and constructive feedback have significantly enhanced the clarity and depth of this manuscript. We believe that the revised manuscript now presents a more convincing and compelling study, and we once again extend our heartfelt gratitude to the reviewers for their efforts and feedback.

(Dataset source: NeurIPS_2024_submissions_huggingface, 2024)
---

Title: Hybrid Generative AI for De Novo Design of Co-Crystals with Enhanced Tabletability
Decision: Accept (poster)

Review 1:
Summary: This paper presents GEMCODE, the first co-crystal design AI pipeline, which consists of four components:
- SMILES-based models for coformer generation.
- Classification models for co-crystal property prediction.
- An evolutionary algorithm for coformer optimization.
- A GNN for prediction of the probability of co-crystal formation.
In addition, experiments are carried out on each component and the entire pipeline.
Strengths: This paper is the first to introduce generative AI into co-crystal design, which is a very important topic for drug development and other fields. Specifically, this paper establishes a complete framework for co-crystal design, including datasets, property prediction, coformer generation, optimization, and validation, which can be recognized as the initial baseline in this field. In addition, the experimental results are detailed and the logic is clear.
Weaknesses: 1. The experimental results of property prediction show that the three mechanical properties are inherently elusive: the accuracy and F1 scores of most models in Figure 2 are below 0.8, which is not ideal for binary classification tasks. Since this paper pioneers both the property prediction and the generation/optimization of co-crystals, my concern is that the defects (or biases) of the property prediction models may propagate into the generation/optimization.
2. In GEMCODE, coformer de novo generation and optimization are divided into two steps, which I think follows the tradition in the field of drug discovery. However, in drug discovery, the main purpose of de novo generation is to find molecular candidates whose "main properties" (such as docking scores) are satisfactory, while the main purpose of optimization is to improve some other properties (such as toxicity and solubility) while maintaining the "main properties" (usually via a similarity constraint). The property objectives of de novo generation and optimization in GEMCODE are the same (without any similarity constraint), which makes me doubt the rationality of the pipeline.
3. Admittedly, I am not familiar with the co-crystal's chemical background, so it is difficult for me to evaluate GEMCODE's chemical validity. Actually, I also did not find any paper on co-crystals in top conferences on machine learning, which makes me question this paper's suitability for publication at this conference. Overall, the main contribution of this paper is to model the computational pipeline of co-crystal design, and to apply existing AI techniques to it. Therefore, in terms of style, I think the chemistry and cheminformatics community may be more appropriate for this paper than a machine learning conference.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Table 1, the "diversity of targets" values are all greater than 0.9. I am not sure if these are internal diversity values; if they are, the values are quite large for sets containing hundreds of molecules. From a molecular design perspective, this suggests that the design objective may be so simple that many very dissimilar molecules satisfy it.
2. In the paper, the tabletability of co-crystals is presented as a target to be improved. However, in GEMCODE, the three properties related to tabletability are all binary variables, so for a coformer molecule, its tabletability is also represented by one binary variable. Therefore, is the word "enhance" inappropriate for tabletability?
3. Of the three coformer molecules generated in Table 3, two contain ions. I'm not sure if ions are common in coformer molecules, but I think they're not common in the ChEMBL database. So, is there a significant distribution difference between the pre-trained and fine-tuned datasets?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Nothing beyond what has already been stated in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We appreciate the reviewer's useful comments and suggestions!
Below, we would like to provide our __answers to the questions__:
1. We are happy to provide a clarification for the high values of "Diversity of target" in Table 1. The diversity values were calculated for the molecules that have been pre-screened for novelty (meaning they are not present in the training dataset), validity, and matching mechanical properties. Therefore, a diversity value exceeding 0.9 appears to be reasonable. Furthermore, the formation of co-crystals can involve molecules with varying chemical structures. Consequently, GEMCODE should have the capability to accommodate diverse chemical structures. This is achieved through hybridization of generative neural networks with evolutionary optimization.
2. Regarding tabletability, we believe that creating a coformer molecule with a target tabletability profile (i.e., the target mechanical properties) can be seen as an enhancement in the tabletability of the co-crystal. However, we are open to adjust the wording if the reviewer suggests a more accurate term.
3. To observe the difference between the pretraining and the fine-tuning datasets, please refer to the t-SNE visualization in Figure 5b of Appendix D.2. The figure shows that the coformer dataset produces a more concentrated set of molecules compared to ChEMBL. The list of coformers was sourced from [1], which was also used by the CCGNet model for predicting co-crystallization probabilities. By integrating CCGNet and training all the other components of our pipeline on the same data, we show consistency of the GEMCODE design. The presence of ionized forms in Table 3 arises from SMILES representations in the coformer dataset indicating charge distribution within _neutral_ molecules (e.g., "Nc1nc2ccc([N+](=O)[O-])cc2s1" with a nitro group, "CSCCC([NH3+])C(=O)[O-]" with carboxyl and amino groups). Consequently, the model has learned to generate charged molecules (e.g., with carboxyl "C(=O)[O-]") in certain instances. We are expanding the scope of GEMCODE beyond co-crystals to encompass other crystalline forms, such as salts. This expansion will aim to specifically address model biases related to assigning charged molecules to co-crystals.
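For context on the metric in answer 1: internal diversity is typically computed as one minus the mean pairwise Tanimoto similarity over the generated set. A minimal sketch on plain fingerprint bit sets (the paper's exact fingerprint type is not assumed; the toy sets below are placeholders):

```python
from itertools import combinations

def tanimoto(a, b):
    """Tanimoto similarity between two fingerprint bit sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def internal_diversity(fps):
    """1 - mean pairwise Tanimoto over all distinct pairs of molecules."""
    pairs = list(combinations(fps, 2))
    return 1.0 - sum(tanimoto(a, b) for a, b in pairs) / len(pairs)

# three toy "molecules" represented as fingerprint bit sets
fps = [{1, 2, 3}, {4, 5, 6}, {1, 4, 7}]
```

A value above 0.9 then simply indicates that the surviving molecules share few fingerprint bits on average.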
We would like to also __comment on the weaknesses__ outlined by the reviewer:
1. We thank the reviewer for highlighting a key limitation in our study. To our knowledge, we are the first to predict mechanical properties of organic co-crystals. Therefore, our work sets the state of the art for the problem. We have run extensive evaluations of various machine learning models incorporating a range of descriptors striving for better performance. We are confident that the capability of predictive models is limited by the training data. As more data of sufficient quality becomes available, we will refine GEMCODE accordingly. Notably, despite the current limitations, several validation cases presented in this work demonstrate the capability of GEMCODE to successfully predict new co-crystals, experimentally validated and reported in the literature.
2. The main motivation for incorporating evolutionary optimization in GEMCODE was to address the aforementioned data limitations. Our dataset consists of approximately 6000 coformers. Training generative neural networks on a dataset of this size may lead to a restricted diversity in the generated molecules. By implementing evolutionary optimization as a separate step in the pipeline, we overcome this limitation by design, as the evolutionary algorithms operate independently of the training data. Furthermore, evolutionary optimization can help mitigate the drawbacks of the machine learning models discussed earlier. The optimization process focuses on enhancing the likelihood of coformers possessing all the necessary mechanical properties. As a result, a molecule generated by the neural networks can be further refined through evolutionary optimization in terms of this likelihood. In such a scenario, the evolutionary optimization step is complementary to the initial generation and demonstrates the advantage of the hybridization approach we presented. Finally, evolutionary optimization is designed to retain the coformers with the highest likelihood on each iteration. In other words, it is guaranteed to improve the mechanical properties of coformers while adding diversity to the pool of the generated molecules.
3. We are confident that our paper aligns well with the NeurIPS guidelines being an _application_ of _machine learning for sciences_ (https://neurips.cc/Conferences/2024/CallForPapers). While co-crystals may not have been a prevalent topic at A* conferences, we see this as an opportunity to expand the horizons of machine learning research and engage with researchers from other fields. Notably, a recent study in this direction was presented for the first time at the ICML 2024 workshop [2]. This only highlights the originality and significance of our work.
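The elitist retention described in point 2 (the best coformers always survive an iteration, so the top likelihood never decreases) can be sketched as a minimal evolutionary loop; the fitness below is a toy stand-in for the mechanical-property likelihood, not the actual GEMCODE objective:

```python
import random

def evolve(population, fitness, mutate, n_gens=30, n_elite=3, seed=0):
    """Minimal elitist evolutionary loop: the n_elite best candidates
    always survive, so the best fitness in the population is non-decreasing."""
    rng = random.Random(seed)
    pop = list(population)
    best_history = []
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        elites = pop[:n_elite]
        offspring = [mutate(rng.choice(elites), rng) for _ in range(len(pop) - n_elite)]
        pop = elites + offspring
        best_history.append(fitness(pop[0]))
    return pop[0], best_history

# toy fitness: likelihood peaks at x = 3.7
fit = lambda x: -abs(x - 3.7)
best, hist = evolve(range(10), fit, mutate=lambda x, rng: x + rng.gauss(0, 0.3))
assert all(a <= b for a, b in zip(hist, hist[1:]))   # monotone improvement
```

Because elites are carried over unchanged, `hist` is guaranteed to be non-decreasing regardless of how the mutation operator behaves.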
In conclusion, we would like to quote the reviewer, _“this paper is a pioneer in both the property prediction and generation/optimization of co-crystals”_. Although this was expressed as a concern, we would like to point out that this quote, in fact, is highlighting a great scientific achievement. Provided that we have sufficiently addressed all the comments and questions, we kindly ask the reviewer to consider increasing the rating to 6. Given that we have initially received the ratings of 8/6/4/4, this would significantly increase our chances to get accepted.
__References:__
[1] Jiang, Y., Yang, Z., Guo, J., Li, H., Liu, Y., Guo, Y., ... & Pu, X. (2021). Coupling complementary strategy to flexible graph neural network for quick discovery of coformer in diverse co-crystal materials. Nature Communications, 12(1), 5950.
[2] Birolo, R., Özçelik, R., Aramini, A., Gobetto, R., Chierotti, M. R., & Grisoni, F. (2024). Deep Supramolecular Language Processing for Co-crystal Prediction.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal and for addressing the concerns raised. I have decided to raise my score to 5.

---

Review 2:
Summary: This work presents a generative framework for co-crystal design that uses deep learning and evolutionary algorithms for optimization. The GEMCODE pipeline can be used to select optimal molecular pair combinations: an active pharmaceutical ingredient and a coformer chosen to control the desired co-crystal properties. The work is an application of generative AI (specifically, GANs and VAEs) to an interesting and important problem in the pharmaceutical industry. The paper has great figures and is well written, and the code appears well documented. Nevertheless, the paper could benefit from some restructuring and/or rewriting to improve clarity, as it was tough to follow in places (there are many molecular representations, model architectures, and target properties in play; these could be better organized to make the paper easier to follow). But overall I really enjoyed reading it and would like to congratulate the authors on the nice work.
Strengths: * The work makes extensive use of existing frameworks (e.g., CCGNet) and datasets (e.g., ChEMBL, CSD) where possible, which is great, while still exploring the development of new tools when it comes to the generative aspect.
* Overall the study is very thorough - many of the questions I found myself having while reading the paper were either answered later on in the paper or in the appendix. Those which were not, I have written below.
* Good details provided in the appendix for the data and many of the methods, it seems it should be reproducible from the details provided herein (as well as the linked code).
* This paper highlights an important application of generative AI to a new domain.
Weaknesses: * It was a good choice, nonetheless, to explore non-neural models for the prediction of mechanical properties, which is where the authors have the fewest data points (6K). However, given the limited data on coformers (about 7K co-crystals), I also would have expected a simpler baseline like a random forest for estimating likelihood of co-crystal formation.
* Clarity could be improved in places, since the authors are working with lots of different representations and models throughout their pipeline. Perhaps this could be better visualized in a way that gives a quick overview of the data type, number of data points, and model architecture, for each model included in the pipeline.
* The model may have benefited from a more advanced hyperparameter tuning for the generation of coformers (e.g., Optuna rather than grid search), as the models and their final performance can be quite sensitive to the selected hyperparameters. For instance, %validity >99% should be attainable in all cases for the molecular generative models presented here, to which the models were close but not all quite there.
* Some of the notation is not well-explained. For instance, in the target coformers equation, what is S? In many of the tables and figures, the axes labels or certain notation is implied, which may be mentioned somewhere else in the paper, but it would improve readability if readers could have a reminder of what that property is (and if it is bounded, a percentage, etc). Also, where there are error bars, it should be stated what these are (e.g., in tables, figures). It is a long paper so this would be appreciated, I found myself flipping back and forth a lot.
* To better understand the diversity of the chemical space spanned by generated molecules, it would have been interesting if the authors better quantified that of the training set as well, and if different training/testing strategies could better assess the generalizability to new coformers rather than simply splitting the data. Were not certain classes of coformers more represented than others?
* Given that the authors want others to use their pipeline to discover new coformers, it would have been great to quantify the coverage of chemical space of the model by metrics other than % validity and % novelty. Did the authors consider such metrics (or even dimensionality reduction visualization/techniques) to get a better sense of the chemical space coverage?
Technical Quality: 3
Clarity: 4
Questions for Authors: * When predicting co-crystal formation, do the authors also have any negative pairs, e.g., molecules that definitely do not co-crystalize? I would assume the bias is towards positive pairs, which perhaps leads to overestimation of the likelihood of co-crystallization for any new pair of molecules, but this was not fully clear to me.
* In the validation case studies (section 5.4), it is unclear if molecules with overlap or significant molecular similarity to nicorandil, rivaroxaban, and paracetamol were excluded from the training set before using the model for these tasks, otherwise there may be data leakage and the conclusions are not so meaningful.
* I see that molecular descriptors (I am guessing 2D?) were used for representing the coformers for mechanical property prediction. Were the fingerprints also explored here, to less effectiveness? Curious since it seems fingerprints were used in other parts of the pipeline, and not sure if descriptors were necessary for good performance here or not.
* Related to the above question, how important is 3D information? I would expect a lot, and wondering if the authors considered including 3D atomic-level descriptors as well in any of their models.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: * In addition to the final median probability scores attained by each model for the molecular optimization tasks explored in this study, it would be relevant and interesting to report the sample efficiency. This is a much more relevant metric in this context of molecular optimization, rather than the final probability score, as it gives a measure of how quickly a model learns (e.g., how many oracle calls are needed to obtain a specific score). Recommend to include an estimate of the sample efficiency in a revised version.
* Is the “median probability score” not in fact a likelihood of sampling specific coformers (and not really a probability)? If so (or if not), I think this could be presented more clearly, as what exactly this metric is was not clear to me. It was probably defined somewhere in the paper but I could not find it easily.
* One big limitation is that it would have been relevant to compare the models for the coformer optimization tasks to a simple baseline, such as a “virtual screening” of the coformers. The advantage of using the generative model presented herein should be that better molecules, and thus better scores, should be achievable with the generative model than by simply screening the database (given a fixed sample budget), but this has not been demonstrated in the current study. It would be a pretty simple/standard baseline since the reference database is small (~7K coformers).
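The sample-efficiency metric suggested above (how many oracle calls a model needs before its best-so-far score first reaches a target) is straightforward to compute from a sampling trace; a minimal sketch with placeholder scores:

```python
def oracle_calls_to_reach(scores, threshold):
    """Number of oracle calls before the best-so-far score first reaches
    `threshold`; returns None if the budget is exhausted without reaching it."""
    best = float("-inf")
    for i, s in enumerate(scores, start=1):
        best = max(best, s)
        if best >= threshold:
            return i
    return None

# e.g., a model whose 4th sampled coformer first clears a 0.8 probability score
assert oracle_calls_to_reach([0.10, 0.55, 0.30, 0.90], 0.8) == 4
assert oracle_calls_to_reach([0.10, 0.20], 0.8) is None
```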
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: We thank the reviewer for the high rating and very valuable comments! We appreciate the detailed feedback on where we can improve the clarity of the manuscript. We will do so in the camera-ready submission.
Below, we provide our __answers to the questions__:
1. For the purpose of ranking molecular pairs according to the probability of co-crystallization, we employed the Co-Crystal Graph Network (CCGNet) model [1]. The authors used a dataset of 6819 positive and 1052 negative samples to train the model, so there is indeed an imbalance problem. Nevertheless, CCGNet was shown to achieve high accuracy on negative samples (97.26%) thanks to the combined use of graph representations of the underlying GNN and 12 molecular descriptors.
2. In the validation case studies (Section 5.4), we predicted three co-crystal systems that were not present in the training dataset. By demonstrating that GEMCODE can generate novel co-crystal structures reported in the literature as experimentally validated, we aimed to showcase its strong predictive capabilities. We understand the reviewer's concern about data leakage and would like to clarify that we ensured that none of these co-crystal systems were included in the training set prior to the validation process.
3. In our study, we indeed utilized a variety of descriptors to predict mechanical properties, encompassing molecular fingerprints of various types (Morgan, MACCS) and lengths (166, 512, 1024, 2048), molecular descriptors sourced from different origins (RDKit, Mordred, PaDEL), and 3D descriptors generated from RDKit (Autocorr3D, MORSE, PMI, etc.). We found that a set of molecular descriptors with physicochemical properties (29, 24, and 30 features for unobstructed planes, orthogonal planes, and hydrogen bonding, respectively) resulted in the best predictive performance.
We would like to also __comment on the weaknesses__ outlined by the reviewer:
1. We appreciate the suggestion to consider using more advanced methods like Optuna for hyperparameter optimization. While we acknowledge that Optuna may provide more efficient tuning compared to grid search, we would like to highlight that grid search is a pragmatic choice for a moderate number of experiments (10-20), delivering acceptable results. In this work, we had to investigate and optimize numerous configurations of the pipeline. Given the complexity and time constraints associated with more advanced methods, we believe that our current approach is a tradeoff between performance and computational cost. We leave the more advanced methods for GEMCODE hyperparameter optimization for the future work.
2. To our knowledge, there is no common stratification strategy for splitting the data in this domain. Another study on co-crystals accepted to ICML 2024 also employed a standard random split methodology [2]. Based on our empirical results, we do not expect any significant change in results for alternative data splits.
In addition, we would like to __comment on the limitations__ pointed out by the reviewer:
1. Regarding the "median probability score", we believe it can be best described as the median probability of assigning coformers to a positive class for each of the mechanical properties. In other words, median probability score gives an idea about the central tendency of the model's confidence in predicting a particular mechanical property. This is, of course, related to the likelihood but reflects a different aspect of the model behavior. We will make sure to clarify this further in the camera-ready version of the manuscript.
2. Table 1 illustrates the comparison of generative models focusing on “Target сoformers”. The percentage indicated in this row represents the ability of the model to generate new coformers with the desired co-crystal properties that were not present in the training dataset. We investigated that, for the Theophylline drug, the percent of target coformers (satisfying all three mechanical properties) in the training set was only 8.46%. Generating candidate coformers for this drug with T-CVAE only, we obtained 6.52% of target coformers. This clearly demonstrates how GEMCODE is capable of discovering previously unknown co-crystal systems. We appreciate the suggestion to illustrate how probability distributions of mechanical properties compare between the training and the generated data. We will add this evaluation to the camera-ready submission.
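Concretely, the "median probability score" described in comment 1 reduces to a per-property median of the classifier's positive-class probabilities over the generated coformers. A minimal sketch (property names and values are placeholders, not the paper's results):

```python
from statistics import median

def median_probability_scores(prob_table):
    """prob_table maps each mechanical property to the list of predicted
    positive-class probabilities for the generated coformers; returns the
    per-property median, i.e., the central tendency of model confidence."""
    return {prop: median(ps) for prop, ps in prob_table.items()}

scores = median_probability_scores({
    "unobstructed_planes": [0.20, 0.80, 0.60],   # placeholder values
    "h_bond_propensity":   [0.90, 0.70, 0.75],
})
assert scores["unobstructed_planes"] == 0.60
assert scores["h_bond_propensity"] == 0.75
```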
__References:__
[1] Jiang, Y., Yang, Z., Guo, J., Li, H., Liu, Y., Guo, Y., ... & Pu, X. (2021). Coupling complementary strategy to flexible graph neural network for quick discovery of coformer in diverse co-crystal materials. Nature Communications, 12(1), 5950.
[2] Birolo, R., Özçelik, R., Aramini, A., Gobetto, R., Chierotti, M. R., & Grisoni, F. (2024). Deep Supramolecular Language Processing for Co-crystal Prediction (https://openreview.net/forum?id=bQ9d2hzjW4¬eId=9SoErgR0kb).
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thank you to the authors for the detailed response and for the clarifications.
One thing that I disagree with still is the comment that
> To our knowledge, there is no common stratification strategy for splitting the data in this domain... Based on our empirical results, we do not expect any significant change in results for alternative data splits.
I do not think this is something that should be swept under the rug. Doing a random split is a standard way of assessing in-distribution generalization, but can you think of other ways to split the data to assess out-of-distribution generalization? I think there are a few interesting experiments you could do, which are not difficult, that would add immensely to the value and utility of the work. These are also standard experiments to run in applied ML papers.
Related to the above point, there also seems to in general be a lot of confusion about what properties/features are spanned by the data (looking at the other reviewer comments as well as mine), which I think could be better clarified.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's prompt reply and the chance to provide additional clarifications. We also fully agree that one should not neglect the importance of data splitting and sufficient empirical evidence supporting such study design choices should be provided. We are aware of molecular scaffolding as a method commonly used in drug design to evaluate out-of-distribution generalization capability of predictive models. However, adapting this method to our co-crystal dataset presents several challenges. First, each sample consists of two coformer molecules with different scaffolds (https://anonymous.4open.science/r/GEMCODE/rebuttal/cocrystals.png). Second, the coformers can exhibit a variety of structures, resulting in a large number of selected scaffolds for analysis (approximately 1000 scaffolds based on our preliminary experiments using the Murcko decomposition method). In contrast, in drug design applications (https://greglandrum.github.io/rdkit-blog/posts/2024-05-31-scaffold-splits-and-murcko-scaffolds1.html), researchers typically work with specific compound classes where it is feasible to identify a relatively small number of scaffolds (10-20). Therefore, given the moderate size of the co-crystal dataset, scaffolding might not produce statistically significant evaluations.
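One simple way to realize a similarity-aware split is a greedy heuristic that pulls an entire Tanimoto-similarity neighborhood onto the test side, leaving the training set maximally dissimilar from it. A sketch on plain fingerprint bit sets (a hypothetical procedure, not the authors' exact protocol):

```python
def tanimoto(a, b):
    """Tanimoto similarity between two fingerprint bit sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def dissimilar_split(fps, test_frac=0.2):
    """Greedy train/test split: seed the test set, then repeatedly move in
    the remaining molecule most similar to it, so similar molecules cluster
    on the test side and the train set stays dissimilar from the test set."""
    n = len(fps)
    test = [0]                       # arbitrary seed molecule
    rest = list(range(1, n))
    while len(test) < max(1, round(n * test_frac)) and rest:
        nxt = max(rest, key=lambda i: max(tanimoto(fps[i], fps[j]) for j in test))
        rest.remove(nxt)
        test.append(nxt)
    return rest, test                # train indices, test indices

# two near-duplicate molecules (indices 0 and 1) end up together in the test set
fps = [{1, 2, 3}, {1, 2, 4}, {7, 8}, {9, 10}, {11, 12}]
train, test = dissimilar_split(fps, test_frac=0.4)
assert set(test) == {0, 1}
```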
At the moment, we identify another approach as the most promising to further shed light on out-of-distribution generalization. In essence, the data can be split based on Tanimoto similarity, maximizing dissimilarity between the molecules of different subsets. We will conduct such experiments in depth and include the key findings in the Appendix of the camera-ready submission. We are thankful to the reviewer for the rigorous evaluation of our work and attention to detail.

---

Review 3:
Summary: This paper presents GEMCODE, a novel pipeline for generating co-crystal designs with enhanced tabletability properties for pharmaceutical applications. The authors combine deep generative models, evolutionary optimization, and machine learning to create and evaluate potential coformer molecules. They train models to predict mechanical properties of co-crystals and generate coformer candidates using various approaches, including GAN, transformer-based VAE, and CVAE architectures. The generated candidates are then optimized using evolutionary algorithms and ranked by co-crystallization probability. The authors report that their T-CVAE model produced the highest percentage of coformers with desired tabletability profiles. They validate their approach by generating experimentally confirmed coformers for drugs like Nicorandil, Rivaroxaban, and Paracetamol. While the results appear promising, the authors acknowledge limitations such as potential bias in property predictions and the need for more comprehensive experimental validation. They also explore the use of language models for coformer generation, finding potential but noting the need for further optimization to achieve competitive performance.
Strengths: I appreciate the comprehensiveness of the experimental evaluation in this paper. The authors have conducted a thorough examination of GEMCODE's performance across multiple dimensions. They systematically compared three different generative models (GAN, T-VAE, T-CVAE) using various metrics such as validity, novelty, and percentage of target co-formers generated. While the described methods are not new (models for property prediction, co-former generation, and co-crystal prediction), this is a novel task that the authors have decided to tackle with GEMCODE.
The authors evaluated the effectiveness of machine learning models for predicting mechanical properties of co-crystals, comparing performance before and after feature engineering. They assessed the impact of evolutionary optimization on the generated co-formers, providing statistical analysis of improvements in desired properties. The pipeline was validated using real-world case studies with known drugs, demonstrating its ability to generate experimentally confirmed co-formers.
Weaknesses: I recommend adding these references in the introduction and related works, as they are relevant to the manuscript [1, 2, 3]. In table 1 and 2, arrows indicating the direction of optimization for the metrics would be helpful.
While the authors provide some validation using known coformers, there is a lack of comprehensive experimental testing of the novel coformers generated by GEMCODE. Synthesis and physical testing of predicted co-crystals, or at least some theoretical ab initio results on the validation cases, would significantly strengthen the claims about the pipeline's effectiveness. More details on the validation experiments are needed, in particular how the therapeutic molecules were selected.
[1] Dollar, Orion, et al. "Attention-based generative models for de novo molecular design." Chemical Science 12.24 (2021): 8362-8372.
[2] Jensen, Jan H. "A graph-based genetic algorithm and generative model/Monte Carlo tree search for the exploration of chemical space." Chemical Science 10.12 (2019): 3567-3572.
[3] Tripp, Austin, and José Miguel Hernández-Lobato. "Genetic algorithms are strong baselines for molecule generation." arXiv preprint arXiv:2310.09267 (2023).
Technical Quality: 3
Clarity: 4
Questions for Authors: Can the authors confirm if the validation experimentation generated co-formers are not present in the training set (the pre-training CHEMBL set or the fine-tuning co-former set)? Are the therapeutics chosen in the validation experiments also not present in any of the datasets (the co-former set in particular)?
The validation experiment needs to be expanded, as the results here would determine the utility of the entire GEMCODE pipeline. While the individual parts of the pipeline work, it is the validation experiments that show whether the co-crystal designs are real. Are there any fitness functions or ab initio results that can demonstrate the effectiveness of GEMCODE in generating co-crystals with novel therapeutics?
The state-of-the-art molecular generation algorithms [1], particularly for drug design tasks, are evolutionary. Rather than using language models, it seems more likely that a genetic algorithm would generate better structures than the GAN/VAEs/LLMs used here.
[1] Tripp, Austin, and José Miguel Hernández-Lobato. "Genetic algorithms are strong baselines for molecule generation." arXiv preprint arXiv:2310.09267 (2023).
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors acknowledge a bias in predicting the absence of orthogonal planes, which could limit the pipeline's effectiveness. More work is needed to address this imbalance in the training data or model architecture. Furthermore, while tabletability is important, the narrow focus on this property overlooks other crucial aspects of pharmaceutical co-crystals such as solubility and bioavailability. This limits the utility of the pipeline for drug development. The authors acknowledge these issues in the limitations and conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful evaluation of our work and for the valuable feedback! We will certainly revise and address all the comments and suggestions in the camera-ready version of the paper and our future work.
Below, we provide __answers to the key questions and comments__:
1. We appreciate the suggestion to include references to additional articles on molecular generative design. We acknowledge the importance of providing a comprehensive overview of existing works in the field and will incorporate the suggested references in the Introduction and Related works sections.
2. We thank the reviewer for pointing out the need for arrows in Table 1 and Table 2 to simplify reading the metrics. We will make sure to incorporate this suggestion in the camera-ready submission.
3. We are currently focused on further validation of GEMCODE through experimental studies. To date, we have extensively validated each component of the pipeline as well as the predicted coformers of the case studies. The predicted coformers for Nicorandil, Rivaroxaban and Paracetamol (Section 5.4) were not present in the training data but these co-crystals are already experimentally confirmed, as reported in the literature. These cases provide strong evidence for the effectiveness of our pipeline.
4. For testing the pipeline, we selected those drugs reported to fail in forming tablets through direct pressing. Finding coformers for such drugs with GEMCODE was, in our opinion, the strongest evidence for the utility of the pipeline. We emphasize that our training data did not include any co-crystals related to Nicorandil and Rivaroxaban, but only systems of Paracetamol due to its widespread use. Therefore, Nicorandil and Rivaroxaban were entirely new to GEMCODE. This demonstrates the pipeline's ability to predict new coformers for both new drugs and drugs already contained in the training data.
5. We agree that many SOTA drug design approaches use evolutionary algorithms as optimizers (e.g. [1]). For this reason, we integrate them as part of our pipeline. However, the results of evolutionary optimization for co-crystals are highly dependent on the quality of the initial solution pool [2]. Furthermore, it may be too computationally expensive to wait for convergence starting from a random population. At the same time, restricting the initial population to the existing database negatively affects the diversity of predicted solutions. Similar issues arise for other widely used molecular optimization approaches. Therefore, we opted for a hybrid approach combining the strengths of design approaches of different nature.
6. To cope with the problem of unbalanced data, we experimented with multiple approaches on the data level (oversampling, undersampling and others) and on the model level (application of other models, adjustment of weights and others). The best metrics were achieved with the approach described in the paper using the threshold for the probability of the positive class. However, we plan to continue working towards improving the performance of the model in predicting orthogonal planes.
7. We appreciate the reviewer's valuable feedback regarding the importance of incorporating additional properties into our pipeline. In fact, we are actively working on extending our predictions to include solubility, tendency to form crystalline hydrates, and other properties of co-crystals. We are developing GEMCODE as an open source project, so we will release these updates as soon as they are sufficiently tested and validated. We believe that these enhancements will further improve the utility and applicability of our pipeline in the pharmaceutical and other relevant domains.
We would like to ask the reviewer to consider raising the score to 7, due to the novelty and importance of the application problem that GEMCODE solves. As far as we know, no paper on this topic has been presented at A* conferences yet (except for the workshop at ICML 2024 [3]). We have done substantial work on GEMCODE development and now have a chance to pioneer the application of generative AI to organic co-crystal design. We will continue to work on GEMCODE (including experimental validation) and will definitely take into account the comments of all reviewers.
__References:__
[1] Ye Z. H. et al. Searching new cocrystal structures of CL-20 and HMX via evolutionary algorithm and machine learning potential // Journal of Materials Informatics, 2024, Vol. 4, No. 2.
[2] O'Connor, D. (2023). (Co-) Crystal Structure Prediction With Machine Learned Potentials (Doctoral dissertation, Carnegie Mellon University).
[3] Birolo, R., Özçelik, R., Aramini, A., Gobetto, R., Chierotti, M. R., & Grisoni, F. (2024). Deep Supramolecular Language Processing for Co-crystal Prediction (https://openreview.net/forum?id=bQ9d2hzjW4&noteId=9SoErgR0kb).
---
Rebuttal 2:
Comment: I thank the authors for their comprehensive rebuttal. The work done here is comprehensive and well motivated. And the author response has also sufficiently addressed my concerns.
I believe the inclusion of some sort of theoretical validation of any future novel co-crystals would be highly valuable, if at all possible. Experimental validation is expensive, and the authors have demonstrated that previously reported literature results not in the validation set confirm that GEMCODE is effective. When open-sourced, having some sort of way to quickly provide theoretical estimates would be very useful for potential users; almost providing a virtual screening sort of pipeline. Perhaps even a regression model fitted to the available data to act as a proxy.
I am willing to **increase review score to 7**.
---
Rebuttal Comment 2.1:
Comment: We are very grateful to the reviewer for consideration and the decision to increase the score!
In the author console, we still see the rating of 6 though. It could be due to a technical issue with OpenReview. Please make sure to have edited and submitted the updated rating in the original review ("Edit" > "Official Review" > ... > "Submit"), such that it is reflected in the average rating of our work. Thank you! | Summary: The authors investigate an interesting chemical problem of generating coformers given an organic molecule such that they would form co-crystals with desirable chemical properties. The authors use GAN/VAE-based methods to generate SMILES of potential coformers, which are then improved by evolutionary optimization, before finally predicting the probability of co-crystallization via a GNN. Traditional statistical models appear to be able to predict certain mechanical properties of the co-crystals given the pairs of SMILES. The GAN/VAE models are able to generate valid molecules, where ~40-80% are predicted to crystallize.
Strengths: *Originality & Significance* The manuscript tackles a very interesting and impactful problem in pharmaceutical production, and overall a long-standing challenge in crystal engineering. To my knowledge, this is one of the first efforts to use generative ML to design co-formers. If completely new crystallization agents/co-formers can indeed be reliably produced, it would be a very useful tool.
*Quality*: In general, the approach taken here makes sense.
*Clarity*: The paper is very clearly written, and the presentation is easy to follow.
Weaknesses: 1. As with any ML for science work, the devil is in the details of the application. There are several big problems I see:
- Given that co-crystals are known to also have a variety of polymorphs, predicting mechanical properties based on SMILES without predicting/knowing the exact crystal structure inherently is not a sound approach (which polymorph's property is predicted?).
- While I agree the overall crystal contacts/packing dictate mechanical properties (ignoring defects), I do not see sufficient justification as to why predicting unobstructed planes/orthogonal planes/H-bonds bridging is a strong proxy for plasticity of co-crystals, let alone tabletability (there are just so many different types of possible interactions and/or indicators). The citations there are at best weak and do not support the claims.
- The definition of validity/novelty is very relaxed. Validity that is 'this molecule has the right valence' does not rule out thermodynamically infeasible molecules (at least, QED/SAScore should both be reported; otherwise, by definition, you can use SELFIES to get 100% valid molecules with every method). Novelty should not be 'how many molecules are not exactly the same as the training set' but rather a histogram of the closest Tanimoto similarity of the generated molecules to the training set (because a molecule can differ by 1 atom and this metric would still count it as novel). Duplicates are better represented by diversity (histogram of Tanimoto similarity between the generated molecules). Sec 5.4 and Table 4 show very similar/simple aliphatic carboxylic acids. This worries me that the molecules produced lack diversity and are not novel.
- It is very hard to validate the results without wet lab results. The predicted probability of co-crystallization and predicted mechanical properties are unfortunately not verifiable outside of models trained here unless structures are known. While I fully agree that experimental validation should not appear in NeurIPS, I fear that 'generating co-crystals with good tabletability' cannot be claimed unless there are wet lab results.
2. The ML approaches taken here are not novel, and to an extent, questionable. If the authors developed a crystallization/property predictor (which produces a combination of scores), I feel the typical optimization approaches (e.g. reinvent, or at least the suite of software in Guacamol) should be at least used as baselines, and otherwise it is hard to justify the usefulness of VAE/GAN; the comparisons against GPT-2 models are much less relevant.
Note: citations on the chemistry side can be significantly improved (e.g., citations 1-6 are hardly relevant; there are plenty of impactful reviews for charge transfer co-crystals and their applications). I also disagree with the claim that current screening for co-crystals 'focus on rather narrow classes of candidate compounds' (especially when the demonstrated results in the paper are all aliphatic carboxylic acids).
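The Tanimoto-based novelty and diversity diagnostics suggested in the weaknesses above can be sketched without a chemistry toolkit by treating each fingerprint as a set of bit indices (in practice the bits would come from e.g. Morgan fingerprints; the helper names below are illustrative, not from the paper):

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def novelty_profile(generated, training):
    """Per generated molecule, the max similarity to the training set.
    A histogram of these values is the suggested novelty diagnostic:
    mass near 1.0 means near-duplicates of training molecules."""
    return [max(tanimoto(g, t) for t in training) for g in generated]

def diversity_profile(generated):
    """Pairwise similarities among generated molecules (diversity)."""
    return [tanimoto(a, b) for i, a in enumerate(generated)
            for b in generated[i + 1:]]

# Toy bit-set fingerprints.
train = [{1, 2, 3, 4}, {2, 3, 5}]
gen = [{1, 2, 3, 4}, {6, 7, 8}]
print(novelty_profile(gen, train))  # [1.0, 0.0]: one exact duplicate, one novel
print(diversity_profile(gen))       # [0.0]: the two generated sets share no bits
```

Note how a molecule differing by a single atom would still count as "novel" under an exact-match criterion, while its nearest-neighbour similarity would sit close to 1.0 in this histogram.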
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I note that in Sec 5.1, it says the data were randomly split. As we know, co-crystal molecules typically have limited diversity, can the authors elaborate how the predictions generalize out of distribution (e.g. by scaffold splitting)?
2. Can the author produce novelty/diversity of generated samples as distribution of Tanimoto similarities?
3. Could you provide CCDC access code (as is tradition) for Table 3?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing a very valuable feedback!
We believe that some of the criticism was caused by a misunderstanding.
We would like to first __comment on the weaknesses__ outlined by the reviewer:
1. Polymorphism is certainly an important factor in the design of co-crystals as it can influence their physicochemical properties [1]. However, the percentage of polymorphs in the CCDC data does not exceed 5% of the total number of co-crystals. Thus, we can assume that their influence on the accuracy of the model in predicting mechanical properties is insignificant.
2. In Appendix C.2, we attempted to summarize the relationship of the predicted mechanical properties to the plasticity and tabletability of the co-crystal. Supporting evidence is given by the paper by Bryant et al [2]. This work was published in CrystEngComm, a top-5 domain journal (https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=chm_crystallographystructuralchemistry). So, we respectfully disagree that this is not a sufficient proof of the relationship. Both the publication and the venue are well respected in the crystallographic community.
3. We agree with the reviewer's assertion regarding the limitations of relying solely on validity. As suggested by the reviewer, we refined our molecule selection by incorporating the SA score. In Appendix F.5, we mention a threshold of SA ≤ 3, resulting in an average coformer synthesizability value of 2.06.
4. The reviewer's concern regarding the high similarity of the molecules in Table 4 (Appendix B.2) is a misunderstanding. In our experiment to discover new coformers for Nicorandil, our objective was to identify coformers with target properties and a high degree of similarity to the existing co-crystals of this drug. Given that the known coformers for Nicorandil are Fumaric and Suberic acid [3], it is logical that the table includes aliphatic carboxylic acids. This result was achieved as intended in the experiment.
5. During validation (Section 5.4), we identified three co-crystalline systems that were not included in the training dataset. By showcasing GEMCODE's ability to produce new co-crystal structures reported in the literature as experimentally validated, we prove its predictive capabilities. We believe that our approach to validating GEMCODE is reasonable and convincing. We share the view that wet lab experiments must not be required for NeurIPS application papers. Nevertheless, we currently expand our team to work on experimental validation of our predictions in the lab.
6. We are thankful to the reviewer for acknowledging that our work is _"one of the first efforts to utilize generative ML for designing co-formers"_ and addresses _"a very interesting and impactful problem in pharmaceutical production"._ We totally share this assessment. Given that the problem is novel, the applicability of previous approaches is very limited. Existing Guacamol test suites do not support co-crystal design tasks, that is why we did not include the Guacamol benchmarks. The drug design methods implemented in Guacamol (SMILES LSTM, Graph GA) [6] served as an inspiration for our own fine-tuned baselines (we use the GAN-LSTM). REINVENT4 (the latest version) is also not suitable for end-to-end co-crystal design tasks out-of-the-box. The models used in the RL optimizer would have to be retrained and the property prediction part would have to be significantly modified to make a comparison. Additionally, there are works claiming that RL methods are not efficient enough for crystal design [7]. A comprehensive comparison of such methods to ours would certainly be interesting but beyond the scope of this study.
We would like to also provide __answers to the questions__:
1. During the validation experiments, GEMCODE successfully generated three new cocrystal systems that were not present in the training data. These cocrystals were reported in the literature as experimentally confirmed. This perfectly demonstrates the generalization capability of our pipeline. Furthermore, in the field of cocrystals, the conventional approach does not typically involve data splits based on class distribution. A recent study accepted for the ICML 2024 workshop [8] employed a standard random split.
2. The reviewer's comment regarding the novelty assessment was particularly helpful, so we conducted further analysis and created histograms illustrating the distribution of the maximum Tanimoto Similarity (IT) between the generated coformers and the coformers from the training dataset (https://anonymous.4open.science/r/GEMCODE/rebuttal/histogramms/GAN.png, https://anonymous.4open.science/r/GEMCODE/rebuttal/histogramms/VAE.png, https://anonymous.4open.science/r/GEMCODE/rebuttal/histogramms/CVAE.png). We observed that the distribution is predominantly centered on IT values ranging from 0.5 to 0.6 across all generative models. This observation strongly supports the assertion that the generated molecules exhibit substantial novelty.
3. We thank the reviewer for highlighting the importance of adding refcodes to Table 3, as this will enhance the accessibility of the information for the reader. We will update the table in the camera-ready submission.
In conclusion, we would like to highlight that we are positioning GEMCODE as a global open-source project. We value constructive criticism and will use the reviewers’ feedback to improve our methods for data analysis, generative models, and validation of results.
As the reviewer recognized the originality and significance of our work, we kindly ask to consider increasing the score to 6. A NeurIPS publication could attract much attention and interest of the professional community, which is essential for advancing and promoting GEMCODE. With current scores at 8/6/4/4, a moderate increase in the score would significantly increase our chances to get accepted.
__References__ will be posted as a separate official comment due to the limit of rebuttal size.
---
Rebuttal Comment 1.1:
Comment: I very much appreciate the authors for the comprehensive rebuttal and the additional plots. Unfortunately, my key reservations have not been solved:
1. While the authors suggest that the influence of polymorphism on predicting mechanical properties is insignificant due to its low reporting percentage, I must respectfully disagree. Polymorphism is very much underreported because chemists often do not explore further once a crystal structure is identified (especially in non-pharmaceutical contexts). In crystal structure prediction literature, hundreds of local energy minima are almost always predicted for any given compound. Often the case, these predictions are later validated experimentally.
2. For the correlation between crystal structures' mechanical properties and tabletability, can the authors explain how the diversity of intermolecular interactions affects the prediction? Here, the authors only consider a few criteria, but I am very certain that crystals with, e.g., halogen bonds would exhibit no hydrogen bonds between planes (hence a good score here) while in reality halogen bonds act very similarly to hydrogen bonds. The cited CrystEngComm paper analyzed only 30 crystals and indeed claimed 'While this tool was not intended to be used alone as a predictor of mechanical properties, it _seems to correlate well_ with mechanical properties'. For the reasons here and the point above, I do think the claim of tabletability needs to be validated with wet lab results (and hence is more suitable for another venue).
3. In Section 5.4, the authors highlight the generation of three new coformers, all of which are carboxylic acids. How is the diversity (in terms of Tanimoto similarity) between generated structures (other than novelty, and I appreciate the histograms)?
4. Maybe this is a misunderstanding, but if the models predict the scores given a pair of SMILES string, why can the authors not employ typical baselines such as Guacamol? Sure, the search space would have to be different and you cannot retrain your scoring function, but GraphGA can surely yield molecules with optimized scores.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the reviewer’s thoughtful comments on our paper.
Below, we provide additional considerations and experimental results to resolve the remaining issues:
1. We acknowledge that there were inaccuracies in our response. Undoubtedly, polymorphism of cocrystals is a significant issue. We will include statements emphasizing its importance for pharmaceuticals. However, since our approach is data-driven, and the data on polymorphism is limited, it was impossible to incorporate it into the predictive models in a meaningful way. We will update the Limitations section accordingly. Thanks to the reviewer, we now identify the influence of polymorphism as a prospective research direction.
2. We fully agree with the reviewer that, in general, there are a number of crystal parameters to consider. However, since we are looking for co-crystals in the field of pharmaceuticals, directional interactions other than hydrogen bonds (such as halogen bonds, chalcogen bonds, pnictogen bonds) are rare. Considering the full set of such interactions is beyond the scope of the study. Certainly, for better generalization of the approach, the set of descriptors for the interplanar criterion should be further improved, but this requires additional data (i.e., those co-crystals having halogen, chalcogen bonds, etc.), currently not available. Although GEMCODE is a stable solution for pharmaceutical co-crystals, the pipeline may not fit so well for other co-crystals, which we will discuss in the updated Limitations section.
3. In response, we plotted distributions of Tanimoto similarity between the molecules generated by the GAN (https://anonymous.4open.science/r/GEMCODE/rebuttal/self_histograms/GAN2.png), and the transformer-based VAE (https://anonymous.4open.science/r/GEMCODE/rebuttal/self_histograms/VAE2.png) and CVAE (https://anonymous.4open.science/r/GEMCODE/rebuttal/self_histograms/CVAE2.png). For each model, the average Tanimoto similarity is between 0.70 and 0.75. On one hand, this highlights sufficient diversity of the molecules. On the other hand, relatively high average similarity was expected, since all generated coformers relate to the same drug. The latter fact aligns well with the observation that the distribution of CVAE is shifted towards 1 due to the “condition” block of the architecture enforcing the target properties of coformers specific to the drug.
We thank the reviewer for suggesting this analysis. We find it very useful, as it supports our key findings while providing additional insights. We acknowledge a moderate fraction of “duplicates” with Tanimoto similarity = 1 among the predicted molecules. Dropping such molecules brings the percentage of target coformers in Table 1 down to 2.23%, 1.68% and 5.63% for GAN, T-VAE and T-CVAE, respectively. Please note that these adjustments do not qualitatively change our results and conclusions. We will update the corresponding sections of the paper accordingly.
4. We agree that baselines from Guacamol (https://github.com/BenevolentAI/guacamol_baselines) can be used to optimize any molecule in SMILES notation with a given objective. However, unlike the Guacamol tasks, the co-crystal design task is multi-objective. The algorithms from Guacamol_baselines (e.g., the noted GraphGA) are focused on single-objective tasks. There are papers where existing Guacamol tasks are used in the multi-objective formulation, but they require different optimization techniques [1].
Nevertheless, __we developed a multi-objective modification of GraphGA__ (with Pareto dominance based fitness; the implementation is available at https://anonymous.4open.science/r/GEMCODE/GraphGA_baseline/graphga.py), suitable for co-crystal design tasks. In the __new experiments__, we started from a random subset of co-crystals (with the same population size and number of iterations as was used in GEMOL). GraphGA demonstrated inferior results to GEMOL in terms of target mechanical properties (see plots at https://anonymous.4open.science/r/GEMCODE/GraphGA_baseline/GEMOL_vs_GraphGA.png). More specifically, the highest average probability over all runs was 0.94 (GraphGA) vs 0.95 (GEMOL) for unobstructed planes, 0.63 vs 0.72 for orthogonal planes and 0.12 vs 0.18 for h-bonds bridging. In addition, the convergence of GraphGA was unstable. Ultimately, the hybrid approach of GEMCODE resulted in 21.3% of target molecules on average, while GraphGA produced 20.5% of target molecules of lower quality in terms of predicted mechanical properties. Therefore, we conclude that GEMCODE clearly outperforms the multi-objective version of GraphGA. We will add an appendix section to the camera-ready submission to highlight this comparison. Also, we will further investigate options to extend the set of baselines.
In light of all this, we kindly ask the reviewer to update the initial review.
[1] Optimized drug design using multiobjective evolutionary algorithms with SELFIES //arXiv preprint arXiv:2405.00401. - 2024.
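The Pareto-dominance-based fitness mentioned in the new GraphGA experiments can be illustrated with a minimal sketch (maximization over the three predicted mechanical-property probabilities; this is an illustration of the general technique, not the authors' actual implementation linked above):

```python
def dominates(a, b):
    """True if score vector a Pareto-dominates b (all objectives maximized):
    a is at least as good everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Non-dominated score vectors; in a GA these form the top fitness tier."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Toy (unobstructed planes, orthogonal planes, h-bond bridging) probabilities.
scores = [(0.94, 0.63, 0.12), (0.95, 0.72, 0.18), (0.90, 0.80, 0.10)]
print(pareto_front(scores))  # the first vector is dominated by the second
```

A selection step would then rank individuals by dominance tier before applying crossover and mutation.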
---
Rebuttal 2:
Title: References used in the rebuttal
Comment: __References:__
[1] Heng, T., Yang, D., Wang, R., Zhang, L., Lu, Y., & Du, G. (2021). Progress in research on Artificial Intelligence applied to polymorphism and cocrystal prediction. ACS omega, 6(24), 15543-15550.
[2] Bryant, M. J., Maloney, A. G. P., & Sykes, R. A. (2018). Predicting mechanical properties of crystalline materials through topological analysis. CrystEngComm, 20(19), 2698-2704.
[3] Mannava, M. C., Gunnam, A., Lodagekar, A., Shastri, N. R., Nangia, A. K., & Solomon, K. A. (2021). Enhanced solubility, permeability, and tabletability of nicorandil by salt and cocrystal formation. CrystEngComm, 23(1), 227-237.
[4] Tripp, Austin, and José Miguel Hernández-Lobato. "Genetic algorithms are strong baselines for molecule generation." arXiv preprint arXiv:2310.09267 (2023).
[5] Ye Z. H. et al. Searching new cocrystal structures of CL-20 and HMX via evolutionary algorithm and machine learning potential // Journal of Materials Informatics, 2024, Vol. 4, No. 2.
[6] Brown N. et al. GuacaMol: benchmarking models for de novo molecular design // Journal of Chemical Information and Modeling, 2019, Vol. 59, No. 3, pp. 1096-1108.
[7] Thomas M. et al. Augmented Hill-Climb increases reinforcement learning efficiency for language-based de novo molecule generation // Journal of Cheminformatics, 2022, Vol. 14, No. 1, p. 68.
[8] Birolo, R., Özçelik, R., Aramini, A., Gobetto, R., Chierotti, M. R., & Grisoni, F. (2024). Deep Supramolecular Language Processing for Co-crystal Prediction. (https://openreview.net/forum?id=bQ9d2hzjW4&noteId=9SoErgR0kb) | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null
Achievable distributional robustness when the robust risk is only partially identified | Accept (poster) | Summary: This paper proposes a general framework of partially identifiable robustness to evaluate robustness in scenarios where the training distributions are not heterogeneous enough to identify the robust risk. The authors define 'the identifiable robust risk' and its corresponding minimax quantity. They show previous approaches achieve suboptimal robustness in this scenario. Finally, they propose the empirical minimizer of the identifiable robust risk and show that it outperforms existing methods in finite-sample experiments.
Strengths: The paper is clearly written. The idea of establishing partial identifiability to fill the gap left by an "all-or-nothing" view on robustness in the SCM framework is interesting.
Weaknesses: I am not an expert in this field, so I cannot point out many weaknesses. One gap may be the lack of motivation from real-world applications: in what applications, and under what conditions, is the proposed framework useful in practice?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. On line 135-137, the $M_{\text{test}}$ is defined as $M_{\text{test}}=\gamma\Pi_{M}$. What's the reasoning behind this choice?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer appreciated the main idea of our paper. In the following, we hope to clarify the real-world motivation for our study and some theoretical assumptions.
**On real-world applications** The main motivation for our paper was expanding the existing assumptions in distribution shift literature to allow for a statement in a more realistic setting, particularly in safety-critical applications. We wanted to account for the fact that distribution shifts are mostly bounded in the real world and occur in “realistic” directions. Contrary to many existing results, in the real world, we rarely have access to enough training environments to achieve point identification of the “best” robust predictor. Instead, we aim to study the “best” model if only a few training environments are given, but the distribution shift (in potentially new directions) is not too large. In the following, we provide a **toy example**, motivated by medical applications, that illustrates our setting.
Suppose we are conducting a long-term medical study, where data is collected from the same group of patients over the years to predict a health parameter $Y$, e.g., cholesterol level in the blood, from a group of covariates $X = (X_1, …, X_n)$ such as age, blood pressure, physical activity, resting pulse, BMI, etc. We are given data $(X^e, Y^e)$ for $e \in \mathcal{E}_{train}$ from multiple past studies, where $\mathcal{E}_{train} = \{2010, 2015\}$ are the years in which the studies were conducted. We assume that the data $(X, Y)$ are generated by an underlying causal model in which not every variable is observed (confounded setting). In this example, we can identify the causal effect of the covariate $X_1$, age, on the cholesterol level since age has a mean shift of 5 years across the studies. Suppose, however, that the distribution of $X_2$, physical activity, remains relatively stable across the studies. The causal effect of $X_2$ on $Y$ is then not identifiable. We now want to train a model that generalizes best on the data $X^{2020}$ collected in $2020$. The age variable shifts again by 5 years; however, the physical activity variable now also shifts a bit (e.g., due to COVID-19), which is a shift we have not observed previously. In this situation, both the causal parameter and the robust predictor are only partially identifiable.
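This toy setting can be mimicked with a minimal simulation (all coefficients and variable roles below are invented for illustration, not taken from the paper): the mean of $X_1$ shifts across training studies while $X_2$ is stable, a hidden confounder $H$ drives both $X_2$ and $Y$, and the test environment additionally shifts $X_2$ in a previously unseen direction.

```python
import random

def sample_study(n, age_shift=0.0, activity_shift=0.0, seed=0):
    """Draw (X1, X2, Y) rows from an illustrative confounded linear SCM."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        h = rng.gauss(0, 1)                    # unobserved confounder
        x1 = rng.gauss(50 + age_shift, 5)      # age: mean-shifted across studies
        x2 = rng.gauss(activity_shift, 1) + h  # activity: stable in training
        y = 0.5 * x1 + 1.0 * x2 + 2.0 * h + rng.gauss(0, 0.1)
        rows.append((x1, x2, y))
    return rows

study_2010 = sample_study(1000, seed=1)
study_2015 = sample_study(1000, age_shift=5.0, seed=2)  # only X1 shifted
study_2020 = sample_study(1000, age_shift=10.0,
                          activity_shift=1.0, seed=3)   # new shift along X2
```

Only the shift along $X_1$ is observed in training; a robust predictor must then hedge against the unidentified $X_2$ direction at test time.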
**Assumption on $M_{test}$**: Often, practitioners will have some information on the strength and direction of the distribution shift (for instance, we might know that only the resting pulse will shift across studies, and at most by 20). This knowledge is what we tried to formalize in our assumption on $M_{test}$ (lines 134-137). The parameter $\gamma$ corresponds to the maximum strength of the shift, whereas the subspace $M$ corresponds to the expected direction of the shift (if no such information is available, one can take $M$ to be the whole covariate space). Equations (3) and (4) bound the second moment of the test distribution shift by $\gamma \Pi_{M}$, effectively bounding the mean and variance of the shift, as well as constraining it geometrically to the subspace $M$.
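In schematic form (our notation here is only illustrative; the precise statement is given by Equations (3) and (4) in the paper), writing $\delta$ for the random test shift, the constraint can be summarized as

$$\mathbb{E}\big[\delta\,\delta^\top\big] \preceq \gamma\,\Pi_{M},$$

which simultaneously bounds the mean and variance of $\delta$ (both are controlled by the second moment) and confines $\delta$ to $M$ (the second moment vanishes on $M^{\perp}$).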
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. It addressed my questions. | Summary: This paper investigates the optimal minimax risk of a robust predictor when the robustness set is partially observable, under a structural causal model with hidden confounders. By decomposing the test covariance matrix of the latent parameters into a component spanned by the training distributions and its orthogonal complement, it is shown that the robust predictor is identifiable only if the test shifts lie in the directions of the training shifts; for unobserved test shift directions, the best achievable minimax risk grows linearly with the shift scale. The theory is applied to show the sub-optimality of OLS and anchor regression under partially observable test shifts.
Strengths: I am not familiar with the field of causal inference, but the paper appears to make a contribution as the first result on distributional robustness or invariant causal prediction when the robustness set is only partially identifiable. In particular, it shows that infinite robustness is impossible in this scenario, and that finite-robustness methods can degrade to the performance of ERM. This echoes empirical evidence and provides a possible theoretical explanation for the reported failures of distributional robustness methods in wild environments.
Weaknesses: The structural causal model in Eq 2 and the resulting explicit solution in Eq 6 show that the model is biased even without distribution shifts. To see this, take $\gamma = 0$ in Eq 6: the predictor does not reduce to $\beta^\star$. The model is unbiased only when the cross-covariance between $\eta$ and $\xi$ vanishes, which implies no hidden confounder and only covariate shift. In contrast, the classic approach for invariant causal prediction [1] produces unbiased estimation beyond covariate shift.
[1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Typo: L137, the sub-space.
2. In Fig 1, bidirectional edges don't make sense for causal graphs, and it's not explained either. Moreover, is there exact equivalence between Fig 1 and the SCM in Eq 2?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed limitations with regard to the model, e.g., the linear structure and additive noise.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating the contribution of the paper and precisely summarizing its main idea.
**Bias of the structural causal model** The reviewer is correct that the causal effect estimate would be biased when the cross-correlation between the noises $\xi$ and $\eta$ is not 0. We would now like to argue that this is a feature rather than a bug. First, we would like to clarify that this paper considers the prediction performance, or MSE, during test time. Therefore, having a biased causal effect estimate does not necessarily affect the test performance, which is still optimal for $\gamma=0$ (no shift). Furthermore, allowing for latent confounding (expressed via cross-correlation of $\eta$ and $\xi$) only makes our model more general and the problem much more challenging to solve. The presence of confounding creates a tradeoff between predictive power and robustness (see, e.g., [2]).
Furthermore, we would like to highlight that even IRM might not always produce unbiased estimates of the causal coefficient. For example, in [1], the synthetic data experiment shows that IRM and OLS have a bias of similar magnitude in a confounded setting with homoskedastic noise.
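To make the bias mechanism concrete, here is a minimal synthetic sketch (an illustrative confounded linear model of our own, not the paper's exact Eq. 2): a latent confounder $H$ correlates the noises on $X$ and $Y$, so OLS converges to $\beta^\star + \mathrm{Cov}(H, X)/\mathrm{Var}(X)$ rather than $\beta^\star$, yet this biased coefficient is still the MSE-optimal linear predictor when no shift occurs.

```python
import numpy as np

# Illustrative confounded linear SCM (hypothetical, not the paper's exact Eq. 2):
#   H ~ N(0,1) is latent;  X = H + eps_x;  Y = beta_star * X + H + eps_y.
rng = np.random.default_rng(1)
n, beta_star = 200_000, 2.0
H = rng.normal(size=n)
X = H + rng.normal(size=n)                  # Var(X) = 2, Cov(H, X) = 1
Y = beta_star * X + H + rng.normal(size=n)

# OLS of Y on X picks up the confounding path through H:
#   beta_ols -> beta_star + Cov(H, X) / Var(X) = beta_star + 0.5
beta_ols = (X @ Y) / (X @ X)
print(beta_ols)  # close to 2.5, not the causal coefficient 2.0
```

Under no test shift, $\beta_{OLS}$ minimizes the prediction MSE despite being causally biased, which is exactly the feature-not-bug point made above.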
**Q2 (bidirectional edges)** Normally, a bidirectional edge $X \leftrightarrow Y$ between two nodes indicates latent confounding. In Figure 1, we explicitly list the latent confounder $H$, resulting in the notation $X \leftrightarrow H \leftrightarrow Y$, to express that the latent confounder can be both an ancestor and descendant of $X$ and/or $Y$. More precisely, we allow for the following scenarios: $X \rightarrow H \rightarrow Y$, $X \leftarrow H \rightarrow Y$, $X \rightarrow H \leftarrow Y$ (the fourth one is excluded due to the acyclicity constraint).
**Q2 (equivalence of Figure 1 and the SCM)** To see the equivalence between the SCM and DAG, it is helpful to consider the three configurations discussed before. When $X \leftarrow H \rightarrow Y$, the causal coefficient $\beta^\star$ equals the weight of the path $X \rightarrow Y$. In this setting, $\eta$ and $\xi$ are correlated via $H$. When $X \rightarrow H \rightarrow Y$, the causal coefficient $\beta^\star$ equals the sum of the weights along the paths $X \rightarrow Y$ and $X \rightarrow H \rightarrow Y$. In this setting, $\eta$ and $\xi$ are independent. When $X \rightarrow H \leftarrow Y$, the causal coefficient $\beta^\star$ equals the weight of the path $X \rightarrow Y$. In this setting, $\eta$ and $\xi$ are independent.
[1]: Arjovsky M, Bottou L, Gulrajani I, Lopez-Paz D. Invariant risk minimization. arXiv preprint arXiv:1907.02893. 2019 Jul 5.
[2]: Rothenhäusler D, Meinshausen N, Bühlmann P, Peters J. Anchor regression: Heterogeneous data meet causality. Journal of the Royal Statistical Society Series B: Statistical Methodology. 2021 Apr;83(2):215-46.
---
Rebuttal Comment 1.1:
Comment: I acknowledge and thank the author for the response. In my review, my concern over the usefulness of the structural causal model is addressed. I recognize that the non-existence of an unbiased causal effect estimator for infinite robustness is expected when bounded distribution shifts are considered, which is also part of the contribution. Therefore, I raise my score from 6 (weak accept) to 7 (accept). | Summary: This paper proposes a new framework for distributional robustness in the linear causal setting. Specifically, the authors minimize the so-called identifiable robust risk, which corresponds to the maximum of the robust risk over parameters in the observationally equivalent set. Under this partially identifiable robustness framework, they derive a lower bound on the risk and show that an estimated identifiable robust predictor can achieve this lower bound, with the corresponding empirical version approximating it well. They also validate the improved performance in finite-sample numerical experiments.
Strengths: - The motivating illustration and results are very clear;
- The direction the authors study is interesting, bridging structured and unstructured notions of distributional robustness.
Weaknesses: - Some paragraphs in the training and testing data part could be reorganized into formal assumptions, e.g., the assumption on $M_{test}$ and the linearity assumption, as well as the discussion in Section 3.2 regarding the definitions of $S$ and $M$. This would give a clearer picture of the key conditions (and possible relaxations) in this paper.
- There are a few missing details (see the Questions below), which I hope the authors can explain a bit more.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the authors say a bit more about the (approximate) real-world setting in which the predictor is partially identifiable? The notation and its practical relevance may be unfamiliar to the general DRO audience.
- Can the authors elaborate more on how the empirical estimates of the training shift direction space $\hat S$ and $\hat R$ are computed, compared with the current version in Appendix D? From my preliminary understanding, this subspace determination is important for the subsequent estimation.
- The potential utility of **active intervention selection** is also interesting, and it seems aligned with some recent relevant work on distribution shifts, for example, causal explanation [1] and incorporating specific features to improve distributional robustness [2]. I wonder if the authors can provide more detail on what their own intervention scheme is and on the connections with this existing literature, which could potentially be incorporated into the main body and appendix.
[1] Quintas-Martinez, Victor, et al. "Multiply-Robust Causal Change Attribution." arXiv preprint arXiv:2404.08839 (2024).
[2] Liu, Jiashuo, et al. "On the Need for a Modeling Language Describing Distribution Shifts: Illustrations on Tabular Datasets." arXiv preprint arXiv:2307.05284 (2023).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the main limitation of this paper, which I think is reasonable compared with existing literature.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out the novelty of the direction of our study. We are happy to fill in on the missing details below:
**Q1 (real-world example of partial identifiability)** Indeed, we can give a toy example that illustrates our abstract setting in Section 3.1: suppose that we are conducting a long-term medical study, where data is collected from the same group of patients over the years to predict a health parameter $Y$, e.g., cholesterol level in the blood, from a group of covariates $X = (X_1, …, X_n)$ such as age, blood pressure, physical activity, resting pulse, BMI, etc. We assume that the data $(X, Y)$ are generated by an underlying causal model in which not every variable is observed (i.e., there may be latent confounding). During training time, we are given data $(X^e, Y^e)$ for $e \in E_{train}$ from multiple past studies, where $E_{train} = \{2010, 2015\}$ are the years in which the studies were conducted. We now want to train a model that generalizes best on data $X^{2020}$ collected in 2020.
In such a setting, we can expect that the distribution shift between the data in 2010, 2015, and 2020 is not entirely arbitrary: for example, covariates such as age ($X_1$) might shift similarly from 2010 to 2015 and from 2015 to 2020, whereas other covariates such as physical activity ($X_2$), BMI ($X_3$), or resting pulse ($X_4$) may exhibit unseen but bounded shifts (e.g., due to a disease outbreak such as COVID-19). This is an example of a structured distribution shift. We observe that we can identify the causal parameter $\beta^\star$ in the direction of $X_1$, since there is a shift in age across the training environments. However, the causal parameter is not identifiable for $X_2$ if the distribution of physical activity has not shifted in previous studies. This renders the causal parameter, and (since the test data shift in the direction of $X_2$) the robust risk of the problem, partially identifiable. In this setting, only $X_1$ is guaranteed to give an invariant prediction, but an estimator that uses only the age might not have enough predictive power. Instead, we suggest also utilizing the spurious correlations between $X_2, ..., X_4$ and $Y$ to predict the target variable, while penalizing the predictor in those directions based on the strength of the expected test shift (e.g., we do not expect the average resting pulse to shift by more than 10).
**Q2 (empirical estimation of S and R)** Indeed, empirical estimation of the subspaces $S$ and $R$ is an important part of the practical application of our method. We will add to the manuscript that the space $S$ can be computed from the second moments of the observed distributions via $S := \mathrm{span} \bigcup_{e \in E_{train}} [(\Sigma_e - \Sigma_0) + (\mu_e - \mu_0)(\mu_e - \mu_0)^\top]$ and can be estimated via the empirical means and covariances of the training distributions. Results on the consistency of the estimated eigenvectors when the dimension is fixed and the eigenvalues are finite are given, for example, by [1]. For the test shift, either prior information on its direction is available via a subspace $M$, which we can incorporate by setting $R = \Pi_{S^{\perp}} M$, or, if not, we may take the conservative estimate of $R$ being the orthogonal complement of $S$. The downside of this choice is that the robustness requirements might be more restrictive than necessary.
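As a rough sketch of this plug-in estimation (the function name, interface, and rank threshold are our own illustrative choices, not the paper's implementation), one can stack the per-environment moment differences and read off their joint column space:

```python
import numpy as np

def estimate_shift_subspace(envs, X0, tol=1e-8):
    """Estimate an orthonormal basis of the training-shift span S.

    envs: list of (n_e, d) sample arrays, one per shifted training environment.
    X0:   (n_0, d) samples from the reference environment.
    """
    mu0 = X0.mean(axis=0)
    Sigma0 = np.cov(X0, rowvar=False)
    blocks = []
    for Xe in envs:
        d_mu = Xe.mean(axis=0) - mu0
        # (Sigma_e - Sigma_0) + (mu_e - mu_0)(mu_e - mu_0)^T, as in the rebuttal
        blocks.append((np.cov(Xe, rowvar=False) - Sigma0) + np.outer(d_mu, d_mu))
    M = np.hstack(blocks)                         # columns jointly span S
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    k = int((s > tol * max(s.max(), 1.0)).sum())  # numerical rank cutoff
    return U[:, :k]
```

The conservative choice for the unseen directions $R$ would then be the orthogonal complement of the returned basis, e.g., the remaining columns of $U$.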
**Q3a (active intervention selection)** Thanks a lot for this comment. Although we plan to have a longer elaboration of this topic in the full manuscript, we briefly discuss here how our approach might be utilized for active intervention selection. In Equation (31), the identifiable robust risk is given by a supremum over the possible true causal parameters from the observationally equivalent set. The argmax of the supremum (which depends on the estimator) utilizes partial identifiability to find the “most adversarial” direction for the test shift, along which the given estimator will suffer the highest test risk. One can actively sample the next dataset by performing an intervention along this direction, which will maximally decrease the identifiable robust risk (31) of a given estimator (e.g., OLS). This procedure can then be repeated.
**Q3b (connections to listed literature)** In [2] (causal change attribution), the main difference seems to be that the causal graph is known and the objective is, instead of robust prediction, causal change attribution. A conceptual similarity seems to be that [2] allows for recovery of the target parameter under partial misspecification. However, in our framework this would mean that although some components of the model might be misspecified (or, in our words, underidentified), the target parameter is still identifiable – thus, we would call the setting in [2] identifiable under the listed assumptions, similarly to the case of anchor regression. [3] is fairly related to our work motivation-wise: our model allows for $Y|X$-shifts in addition to $X$-shifts, consistent with the observation that $Y|X$-shifts frequently occur in tabular data and cannot be neglected.
[1]: Anderson TW. Asymptotic theory for principal component analysis. The Annals of Mathematical Statistics. 1963 Mar;34(1):122-48.
[2]: Quintas-Martinez V, Bahadori MT, Santiago E, Mu J, Janzing D, Heckerman D. Multiply-Robust Causal Change Attribution. arXiv preprint arXiv:2404.08839. 2024 Apr 12.
[3]: Liu J, Wang T, Cui P, Namkoong H. On the need for a language describing distribution shifts: Illustrations on tabular datasets. Advances in Neural Information Processing Systems. 2024 Feb 13;36.
---
Rebuttal Comment 1.1:
Comment: I acknowledge and thank the authors for their detailed response. Therefore, I keep my evaluation of the paper.
Specifically, I am quite interested in the active intervention part the authors mention and agree with what the authors say in the rebuttal with respect to its connection with the active learning side. Therefore, I am eager to see their revised version. Btw, for [3] (i.e. https://arxiv.org/pdf/2307.05284), I am pointing out their intervention part (i.e., Section 5 in their latest arXiv version instead of the conference version), which also discusses some potential feature-based interventions and can be of separate utility to the authors. | Summary: The paper studies a linear Structural Causal Model (SCM) for prediction under distribution shifts due to an additive term on the covariates that changes at test time, and an unobserved confounder between the covariates and the label. The key difference between the proposed analysis and those presented in other works (e.g. those in anchor regression, invariant causal prediction etc.) is that the shift is bounded in its strength. Another difference is that the optimal robust predictor w.r.t the entire uncertainty set is not identifiable. Hence, instead of the optimal robust error, the paper studies the optimal error that can be achieved from observable data. This corresponds to the robust error over an uncertainty set that is generated by all SCM parameters that could have produced the training data, which is in general a larger set than the uncertainty set we are interested in (i.e. of bounded additive shifts to the covariates).
Once the problem is set up, a parametric form for the set of parameters that can generate the training data is derived, along with the corresponding set of possible robust predictors. Then, under a mild assumption on the boundedness of the ground-truth regression parameters, the following are derived: 1) a closed form for the robust loss in the case where test distributions only induce shifts in directions that have been observed at training; the loss grows linearly with the strength of the shift; 2) a lower bound on the achievable risk from observed data, which is tight (and thus can be learned from observed data) for large enough shifts.
Further analysis and simulations with Gaussian data are done to compare the risk of the derived estimator with that of ERM and anchor regression (which does not exploit the boundedness of shifts to achieve better prediction). The results verify that the risk obtained by the proposed empirical estimator is close to the lower bound given in the theorem, and that it outperforms the two baselines as shifts grow large.
Strengths: * Overall I enjoyed reading the paper; it is clear and written with care. Generally, I also like the direction of formalizing bounded shifts and unidentifiable settings in detail. Finally, the analysis is well performed and easy to follow.
* The work is original in formally analyzing bounded distribution shifts, where even in population the optimal robust predictor might still have non-zero projection on directions that shift at test time. \
Small note on this: I think that for finite-sample guarantees, boundedness assumptions on the strength of the shift must be made, and they are made in works that give sample complexity results. I believe the reason is that with unbounded shifts, any small weight on a shifting feature can be magnified unboundedly to yield a large robust error. With finite samples, it is usually impossible to guarantee strictly zero weights on the shifting directions. Therefore it might be worthwhile clarifying that considering the population loss is an important component of the analysis.
* Beyond the points mentioned above, the bounds on the robust risk, closed form solutions, and the formalism used in the work, can be useful for future theoretical analyses. In terms of practical significance, the approach derived from these results to explicitly account for bounded shifts might be useful in the future, if it is generalized beyond linear models.
Weaknesses: * As mentioned above, and as the authors mention in describing the limitations of their work, it is restricted to linear models. Intuitively, a considerable challenge in learning robust models is to identify the “directions” that shift between domains. Under linear models this is rather straightforward, and the method proposed in the paper depends on the linearity of the model in order to find these directions. This is unlike some other methods in the domain generalization literature, where certain formal results are provided for linear models but the methods can easily be tested in the non-linear case.
* Even within the realm of linear models, I am not thoroughly convinced that the method is useful in real-world problems. To make this more convincing, it might have been nice to run simulations on non-synthetic data and with some more variants of methods. E.g., it may be of interest to use several methods that learn invariant models, with hyper-parameter tuning performed using the objective proposed in this work. That is, to see whether the boundedness of the shift should be taken into account during training, or whether it is enough to simply use it for model selection. Yet the most significant drawback is still, of course, the limitation to linear models.
Some other/smaller comments:
* In line 60, prior work is cited to claim "that even minor violations of the identifiability assumptions can cause invariance based methods to perform equally or worse than ERM". However, I am not sure that the failures portrayed in these works are strictly due to identifiability violations – at least not the one alluded to in this paper, namely the lack of heterogeneity in the training environments. In Kamath et al. 21, the predictor is identifiable but the problem is specifically in the IRMv1 objective; see also [1], who point out an objective that solves this issue. In Rosenfeld et al. 20, only one failure is due to not having enough environments (and arguably, this is because they consider an uncertainty set that includes shifts in all directions; I also touch on this in the next minor comment), and the second one is due to non-linearity.
* Regarding analysis under lack of heterogeneity or an unidentifiable optimal robust predictor: I think that the part of the analysis that touches upon the insufficient heterogeneity of the environments for identifying the robustness set/robust predictor can be slightly reframed. If I am not mistaken, even in works that give results about identifiable robustness sets (e.g. when the number of environments is linear in the number of shifting features, or stronger ones like [2]), it is most likely possible to draw guarantees about robust risks, but only with respect to a smaller uncertainty set which is restricted to dimensions that shifted between the training environments. That is because the methods still enforce constraints which restrict weights in certain directions. While it is true that most of these prior works did not consider unidentifiable uncertainty sets, this might be related to the choice of presentation, and not strictly because the methods are technically limited in that sense. Further, in lines 168-169 it is claimed that “prior work only considers scenarios where the robustness set and hence also the robust prediction model are still identifiable”. I am not sure this is entirely true. There are different formalisms that were considered for spurious correlations, such as those in [3]. In their setting, when the association is not what they call “purely spurious”, the robust predictor cannot necessarily be recovered. Also, [4] study a similar setting where, for similar reasons, no guarantee can be given on identifying the robust predictor; only a lower bound on the error is given.
* It might be worth mentioning in lines 112-113 that in case $H$ in Figure 1 is a selection variable (I assume it can be, since both edges are bi-directional), the shift reduces to covariate shift, if I am not mistaken.
* Personally, I found the notation in the introduction which uses $\mathrm{shift}\in{\mathrm{shift set}}$ etc. somewhat redundant and confusing. The more formal notation from Section 2 onwards was much easier to understand, so in my opinion it might be worth going straight into that notation.
[1] Wald Y, Feder A, Greenfeld D, Shalit U. On calibration and out-of-domain generalization. Advances in neural information processing systems. 2021 Dec 6;34:2215-27.
[2] Chen Y, Rosenfeld E, Sellke M, Ma T, Risteski A. Iterative feature matching: Toward provable domain generalization with logarithmic environments. Advances in Neural Information Processing Systems. 2022 Dec 6;35:1725-36.
[3] Veitch V, D'Amour A, Yadlowsky S, Eisenstein J. Counterfactual invariance to spurious correlations in text classification. Advances in neural information processing systems. 2021 Dec 6;34:16196-208.
[4] Puli AM, Zhang LH, Oermann EK, Ranganath R. Out-of-distribution Generalization in the Presence of Nuisance-Induced Spurious Correlations. In International Conference on Learning Representations.
Technical Quality: 4
Clarity: 3
Questions for Authors: * It would have been nice to derive upper bounds on the robust loss in addition to the lower bounds proved in the theorem. That is, assuming these bounds can be calculated from observable data, I’d imagine that most practitioners would be more interested in an upper bound, as it gives a strong guarantee on the worst possible risk. Do you think this is a reasonable goal, and do you perhaps have any insights on this?
* Could you perhaps discuss the differences between the empirical optimization problem you derived in the paper and those derived in prior work? For instance, looking at eq. 19 in appendix D, it seems like there is a term that penalizes $\| \hat{S}^\top(\hat{\beta}^{\mathcal{S}} - \hat{\beta}) \|$, which I read as: the weights in the “invariant directions” should be the same across the environments. This seems quite similar to ICP, IRM etc. Then the other term limits the projection on the shifting directions, but it might be useful for readers to have a short discussion on the conceptual similarities between this and penalized versions of other methods. Also, there might be a typo in that equation, $\mathrm{arg}\min_{\beta\in{\mathbb{R}^d}}$ should be $\mathrm{arg}\min_{\hat{\beta}\in{\mathbb{R}^d}}$ instead.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating the overall direction and clarity of our paper, as well as the originality of our work. We are grateful for the constructive comments and will incorporate the minor points in the revised version of the paper. We now respond to some of the major points and questions.
**Linearity of the model** We acknowledge that understanding which shift directions are “learned” and which are “unobserved” during training for more general, nonlinear models is an important direction for future work. We further recognize the linearity of our setting as a limitation. We would like to clarify that the main contribution of our paper is not primarily to propose a new method for domain generalization (that can then also be tested in non-linear settings), but to introduce and make a first step towards quantifying the limits of robustness in a partially identifiable setting. On this note, we expect that the results and intuition developed in this paper for the linear case can be utilized beyond linear models, since realistic distribution shifts can often be reduced to linear shifts in a lower-dimensional latent space via a suitable parametric or non-linear map [5, 6].
**Evaluation of the method in real-world settings** Even though the method is not necessarily the focus of our paper, we agree that our work could benefit from a more thorough evaluation on real-world data. To this end, in the attached PDF, we present preliminary results of experiments on real-world single-cell gene expression data [7]. The dataset consists of single-cell observations of over 622 genes from observational and several interventional environments, arising through a knockdown of each unique gene. We pre-select always-active genes in the observational setting, resulting in a smaller dataset of 28 genes. We measure the performance of our method, the identifiable robust predictor (Rob-ID), compared to other algorithms [10, 11, 12], as follows. We select each gene once as the target variable $Y$ and select the three genes most strongly correlated with $Y$ (using Lasso), yielding a reduced dataset for each target.
Given this dataset, we test on all combinations of training (observational + one shift) and test environments (other shift).
We train the algorithms on the training environments and evaluate them on the test environments using mean square error (MSE). The results in Figures 2(a) and 2(b) indicate that Rob-ID outperforms existing distributional robustness methods, particularly for larger shifts in new directions.
**Failure cases of IRM as motivation for our work** We agree that while the cited papers include non-identifiability as failure cases (Kamath et al. ‘21 Section 4 and Rosenfeld et al. ‘20), they also discuss other possible reasons for failure. We can clarify this in the revised version by adding that multiple failure cases have been discussed in the literature (citing these papers), one of which is non-identifiability, the focus of this work.
**Application of existing work for partially identifiable case** We agree that prior methods can be applied and evaluated in the non-identifiable setting, even though prior analysis has so far focused on the case where the robust risk was identifiable (including your described case of the “uncertainty set which is restricted to dimensions that shifted between the training environments”). In fact, one of the goals of our paper was exactly to evaluate prior methods in this new setting.
**Existing formalisms for spurious correlations**
We thank the reviewer for pointing us to the relevant and interesting works [3] and [4]. Although these works describe cases where the optimal robust predictor cannot be recovered, their results seem to be binary negative statements, in that they provide neither a quantification of this failure nor a proposed method for robustness in this failure case.
**Q1 (upper bounds)** In our result, for $\gamma > \gamma_{th}$, the value of the inf sup is exact, thus, in Theorem 3.1, Case (b) line 2 it is both a lower and an upper bound (we will clarify this in the main text). For small $\gamma$, the loss of the anchor regression estimator is a tight upper bound – which for the practitioner means that if the shifts in unknown directions are expected to be very small, the anchor estimator is optimal.
**Q2 (connections to invariance objectives and regularizers)** Thanks for the suggestion; we will include a more intuitive interpretation of the individual terms in the revised manuscript. In particular, we will discuss the population objective (13), which corresponds to the empirical objective in (19) when we replace $S$, $\beta^S$, $R$ by their empirical estimates. The objective consists of three parts: 1) the loss on the reference environment (without any distribution shift), 2) the term $||S^\top (\beta^S - \beta)||$ that “aligns” $\beta$ with the true causal parameter $\beta^\star$ projected on $S$, and 3) the term $(C_{ker} + || R^\top \beta ||_2)^2$ that shrinks $\beta$ in the directions unseen during training. Since $\beta^\star$ is the optimal invariant predictor (although unknown), the second term can be interpreted as aligning the estimator towards the true invariant predictor along the observed shifts (and hence corresponds to inducing invariance across training environments, in the spirit of the invariance literature, as the reviewer pointed out).
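Schematically (relative weights are suppressed and their placement here is our own illustration; the precise objective is Equation (13) in the paper, with empirical plug-ins in (19)), the three parts combine as

$$\min_{\beta \in \mathbb{R}^d} \;\; \underbrace{\mathbb{E}_{P_0}\big[(Y - X^\top \beta)^2\big]}_{\text{1) reference-environment loss}} \;+\; \underbrace{\big\|S^\top(\beta^{S} - \beta)\big\|^2}_{\text{2) alignment along observed shifts}} \;+\; \underbrace{\big(C_{ker} + \|R^\top \beta\|_2\big)^2}_{\text{3) shrinkage along unseen directions}},$$

with the third term weighted according to the expected test-shift strength $\gamma$.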
For citations, please refer to the general rebuttal due to space constraints.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the comments and questions, I also appreciate the additional empirical results.
I see the point in most of the replies, perhaps one small comment is that [3, 4] indeed provide some binary negative statements, but [4] also give a form of a guarantee (an estimator that has better-than-random worst case performance, and is optimal in a restricted manner). I agree that it is a different flavor of guarantee and analysis than the one given in this paper. Hence my comment is just meant to suggest a slight adjustment to the framing of the contribution, not to devalue it in any way.
I have raised my score following the in-depth response. | Rebuttal 1:
Rebuttal: We express sincere gratitude to all reviewers for their detailed reviews. It is encouraging to hear that the reviewers appreciate the novelty and overall direction of our work, especially its attempt to formalize partially identifiable robustness.
We also appreciate constructive feedback and we have carefully addressed and answered all points raised by the reviewers in the revised manuscript. For convenience, we summarize some important points that were mentioned by several reviewers and elaborate on some of them in more detail in the individual rebuttals.
- **Evaluation of the method on real-world data**: In the answer to Reviewer PdW8, we expand the empirical evaluation of our method to real-world single-cell gene expression data [7] – please see the attached PDF for preliminary results.
- **Meaning of “partial identifiability”**: It seemed that there might have been some remaining confusion about the way we use the term “partial identifiability”, also in the context of previous work. We would like to stress that in this paper, we focus on partial identifiability of the *robust risk* that arises through the non-identifiability of the causal/model parameters. In particular, we distinguish between partial identifiability of the robust risk (not considered in prior work) and partial identifiability of the causal parameter (considered in prior work, e.g. [10]). For example, existing literature [10] often considers cases where, although the model parameters are not fully identifiable, the robust predictor can still be computed. In the language of our paper, we call such a setting fully instead of partially identifiable, since the robust risk and its minimizer can be computed from data, as opposed to our case, where neither the model nor the robust risk can be identified.
- **Relation to invariance-based literature**: In the answer to Reviewer PdW8, we explain how existing literature (e.g., [14,15]) on the failure of invariance-based methods (such as Invariant Risk Minimization) can be related to lack of identifiability and thus motivates our study.
- **Real-world examples/motivation**: In the answers to Reviewers iX3g and 2y9v, we describe a motivating toy example based on medical data where the robust predictor is partially identifiable.
Here we add the references for this general response and the rebuttal to Reviewer PdW8:
[1] Wald Y, Feder A, Greenfeld D, Shalit U. On calibration and out-of-domain generalization. NeurIPS 2021
[2] Chen Y, Rosenfeld E, Sellke M, Ma T, Risteski A. Iterative feature matching: Toward provable domain generalization with logarithmic environments. NeurIPS 2022
[3] Veitch V, D'Amour A, Yadlowsky S, Eisenstein J. Counterfactual invariance to spurious correlations in text classification. NeurIPS 2021
[4] Puli AM, Zhang LH, Oermann EK, Ranganath R. Out-of-distribution generalization in the presence of nuisance-induced spurious correlations. ICLR 2022
[5] Thams N, Oberst M, Sontag D. Evaluating robustness to dataset shift via parametric robustness sets. NeurIPS 2022
[6] Buchholz S, Rajendran G, Rosenfeld E, Aragam B, Schölkopf B, Ravikumar P. Learning linear causal representations from interventions under general nonlinear mixing. NeurIPS 2023
[7] Chevalley M, Roohani Y, Mehrjou A, Leskovec J, Schwab P. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint 2022
[9] Schultheiss C, Bühlmann P. Assessing the overall and partial causal well-specification of nonlinear additive noise models. JMLR 2024
[10] Rothenhäusler D, Meinshausen N, Bühlmann P, Peters J. Anchor regression: Heterogeneous data meet causality. JRSSB 2021
[11] Shen X, Bühlmann P, Taeb A. Causality-oriented robustness: exploiting general additive interventions. arXiv preprint 2023
[12] Peters J, Bühlmann P, Meinshausen N. Causal inference by using invariant prediction: identification and confidence intervals. JRSSB 2016
[13] Arjovsky M, Bottou L, Gulrajani I, Lopez-Paz D. Invariant risk minimization. arXiv preprint 2019
[14] Rosenfeld E, Ravikumar P, Risteski A. The risks of invariant risk minimization. arXiv preprint 2020
[15] Kamath P, Tangella A, Sutherland D, Srebro N. Does invariant risk minimization capture invariance? AISTATS 2021
Pdf: /pdf/e7215efae0c5624e7d981756bfae348185160027.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Continuous Temporal Domain Generalization | Accept (poster) | Summary: The paper introduces the problem of Continuous Temporal Domain Generalization. It extends the Temporal Domain Generalization problem, which aims at developing models under temporally varying data, to handle data collected at arbitrary and continuous time points. The paper proposes a framework based on Koopman theory for this problem. It models the underlying dynamics and proposes an optimization strategy. The paper includes several classification experiments to demonstrate the effectiveness of the proposed approach.
Strengths: * The paper introduces a new problem, continuous temporal domain generalization, where data is continuous and irregularly observed. This is a relevant and challenging setting.
* The paper is supported by proofs and the relevant assumptions under which the model is well-defined are clearly stated.
* The paper includes extensive experiments on various datasets, both synthetic and real-world, to demonstrate the effectiveness of the proposed method. The proposed model improves over existing methods.
Weaknesses: * the paper could discuss the limitations of the proposed approach in more depth, in particular Assumption 1, which states that the conditional probability distributions follow an ODE. While reasonable to make, this assumption is quite restrictive; I understand that the considered problem is challenging, so relevant assumptions should be made. However, the paper would need to discuss the validity of this assumption. Another limitation can come from abrupt changes over time that might not be well captured by a continuous ODE formulation (mentioned in A.1.7); this assumption is likely to be violated in real-world settings.
* the paper does not seem to follow the checklist; there is no justification for each point and, if I am not mistaken, some "Yes" answers are not correct, e.g. a Broader Impact section is missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: * How does it compare to domain invariant approaches (e.g. IRM, V-REx etc.)? A domain-invariant approaches might be better at handling abrupt changes.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: * see limitations points in "Weaknesses".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing and acknowledging our work! We see that your main concerns are about the assumptions and the evaluation. Please read our answers along with the Rebuttal PDF.
> W1. discuss more the limitations; Assumption 1 that states the conditional probability distributions follow an ODE, need to discuss the validity. abrupt changes throughout time that might not be well captured by a continuous ODE formulation; this assumption is likely to be violated in real-world settings.
1. Assumption 1 does not state that the conditional probability distributions P(Y(t)∣X(t)) follow an ODE. Rather, it assumes that if continuous temporal domains exhibit gradual concept drift, then the conditional probability distributions P(Y(t)∣X(t)) change continuously. In this context, 'change continuously' means that the distribution changes are smooth and incremental. This continuity assumption is reasonable because, in the field of concept drift research, gradual (or incremental) concept drift is considered to be characterized by small, incremental changes [R1]. While real-world gradual concept drift is more complex and may fit the continuity assumption imperfectly, we do not lose much generality, as the underlying factors leading to gradual concept drift are primarily continuous physical, biological, social, or economic processes.
2. Given the continuity of gradual concept drift, it is appropriate to use differential equations as a tool for modeling. Differential equations provide a mathematical framework to capture the essence of continuous processes. They are particularly useful when the system's evolution can be represented by smooth and continuous dynamics. We chose to model the continuity of model parameters using ODEs as an initial attempt because ODEs offer a simple yet clear and flexible toolset for understanding how systems evolve over time. While it is possible to replace ODEs with more advanced differential equations (e.g., Partial Differential Equations (PDEs), Stochastic Differential Equations (SDEs)) to incorporate factors such as noise and uncertainty, this may detract from the key focus on modeling the fundamental continuous processes. Indeed, our experiments on both synthetic and real-world data have shown that even the simplest ODEs are enough to achieve a pretty significant success.
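As a minimal, hypothetical illustration of this ODE framing (not the authors' implementation): a drifting quantity such as a decision-boundary angle can be modeled as dθ/dt = f(θ, t) and integrated forward to any observation time, regular or irregular. The drift function `f` and the constant rate `omega` below are invented for the sketch.

```python
# Hypothetical drift dynamics: the decision-boundary angle of a slowly
# rotating domain drifts at a constant rate omega (a stand-in for the
# learned dynamics in the ODE formulation above).
def f(theta, t, omega=0.5):
    return omega

def euler_solve(theta0, t0, t1, steps=1000):
    """Forward-Euler integration of d(theta)/dt = f(theta, t)."""
    theta, t = theta0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        theta += h * f(theta, t)
        t += h
    return theta

# The continuous formulation can be queried at arbitrary, irregularly
# spaced times; there is no fixed sampling grid:
angles = {t: euler_solve(0.0, 0.0, t) for t in (0.3, 1.0, 2.71)}
```

The point of the sketch is only that a continuous-time model assigns parameters to every time point, so irregular observation times pose no special difficulty.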
Discussion of Limitation:
1. The limitation regarding capturing abrupt drift arises from the assumptions inherent in the Temporal Domain Generalization (TDG) framework. TDG assumes that domain pattern drifts follow certain predictable, smooth patterns, allowing future changes to be modeled as a sequence. CTDG extends TDG to continuous time, further generalizing the problem to handle domains collected at arbitrary times. Thus, their assumptions are aligned in focusing on smooth, predictable concept drift.
2. To address abrupt drift, alternative frameworks are required. These include domain-invariant learning in static models, or ensemble learning/importance weight sampling with a detect-and-restart strategy. These methods operate under different assumptions and frameworks compared to TDG and CTDG. It can be challenging to determine which model is superior without considering the specific application scenario, as different models come with different implicit biases.
3. However, different frameworks can be combined to address the full spectrum of domain generalization challenges. Our approach does not conflict with methods designed to handle abrupt drift, such as ensemble learning or detect-and-restart strategies. This integration could create a more robust and adaptable system. Exploring such a comprehensive approach may represent a promising direction for future work.
> W2. the paper does not seem to follow the checklist; there is no justification for each points and if I am not mistaken, some "Yes" answer are not correct e.g. a Broader Impact section is missing.
We apologize that all our answers were "Yes". We have reviewed and justified each point in the checklist to ensure compliance.
1. Broader Impact: No potential negative societal impact was found in this work; our research is foundational and not tied to particular applications.
2. Q11, Yes should be NA.
3. Q13, Yes should be NA.
4. Q14, Yes should be NA.
5. Q15, Yes should be NA.
> Q1. How does it compare to domain invariant approaches (e.g. IRM, V-REx etc.)? A domain-invariant approaches might be better at handling abrupt changes.
1. We implemented IRM and V-REx, and we also implemented an advanced adversarial-learning baseline, CIDA [42], for comparison. CIDA is a powerful strategy built upon adversarial learning. It enhances domain-invariant representations by considering the distance between domain indexes.
2. Results are shown in Table 4 (in the rebuttal PDF), demonstrating that our proposed Koodos performs competitively against these methods and still outperforms them by a large margin.
3. However, as we discussed before, different frameworks can be combined to address the full spectrum of domain generalization challenges. We think exploring a comprehensive system is a promising future direction.
[R1] Learning under Concept Drift: A Review, TKDE 2018
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification w.r.t. assumptions and the additional evaluation of domain invariant approaches; I increased my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer nh98
Comment: Thank you for your positive feedback and for acknowledging our work. We appreciate your decision and are glad that our responses helped address your concerns. | Summary: The article introduces Continuous Temporal Domain Generalization (CTDG), which extends traditional Temporal Domain Generalization (TDG) methods by addressing the challenges posed by continuous and irregularly spaced temporal domains. The authors propose a Koopman operator-driven framework (Koodos) to handle data evolving continuously over time and collected at arbitrary intervals. The framework aims to capture continuous dynamics in data and models, optimize generalization across continuous temporal domains, and leverage prior knowledge of dynamic patterns. The experiment results demonstrate the efficacy of this approach compared to some existing methods.
Strengths: The primary contribution is the introduction of the CTDG problem and the Koodos framework, which extends TDG to handle continuous and irregular temporal domains. The paper also provides a comprehensive optimization strategy and theoretical analysis to support the proposed approach.
Weaknesses: The method assumes data dynamics can be well-characterized by the Koopman operator. While powerful for linearizing nonlinear dynamical systems, its effectiveness across various domains (e.g., finance, healthcare, social media) requires more extensive empirical studies, especially for highly noisy or chaotic systems. The proposed hybrid loss function's complexity may present challenges in training and convergence. A detailed analysis of its behavior and optimization strategies would be beneficial. The method introduces additional complexity and hyperparameters. A comprehensive discussion on hyperparameter sensitivity and guidelines for setting loss weights would enhance the method's practical applicability.
Theorem 2, which compares continuous-time and discrete-time approaches in approximating temporal distribution drift, does not directly address generalization bounds or risks in domain generalization. While it provides insights into error accumulation over time, it lacks analysis of the model's performance on unseen temporal domains.
The experimental comparisons are somewhat limited. Only one continuous baseline is used, and it appears that key comparison metrics are missing. Methods like AdaRNN[1] and Diversify[2], which aim to address the same issue, are not discussed or compared. More comparisons to state-of-the-art discrete TDG methods adapted to the continuous case would strengthen the evaluation.
[1] AdaRNN: Adaptive Learning and Forecasting of Time Series
[2] Diversify to Generalize: Learning Generalized Representations For Time Series Classification
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In typical domain generalization challenges, the objective is to align the joint distribution \(P(X, Y)\) across different domains. For Temporal Domain Generalization (TDG), this would extend to aligning \(P(X(t), Y(t))\) at different time points. How does Koopman Theory address this issue in aligning these temporally varying distributions?
2. The authors focus on the existence of continuous and relatively stable dynamic changes across continuous time. How effective is this method when dealing with continuous but non-periodic or irregular data variations, or when a new dataset significantly differs in its statistical properties from the training data? Is there potential to integrate other mechanisms or models to enhance the handling of such substantial data changes?
3. In the Methods section, you emphasize addressing changes in the conditional probability distribution over time. However, the datasets used, such as Rotated MNIST, Rotated 2-Moons, Yearbook, and Cyclone, are commonly associated with covariate shifts rather than shifts in the conditional probability distribution. These datasets appear to maintain a stable relationship between the labels and the input data or a latent feature space. Could you clarify how the changes in the conditional probability distributions are represented or accounted for in these datasets?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: No potential negative societal impact was found in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing.
> W1. needs more empirical studies.
>
1. It has been proven theoretically in [4] that the Koopman operator can linearize any nonlinear dynamical system.
2. Our empirical evidence supports its applicability across various domains. Datasets are summarized in Tab. 5.
- Experiments span a wide range of real-world domains (7 in total): social media, culture, climate, economics, business, physics, and energy.
- Experiments include various modalities (tabular, image, text) and tasks (classification, regression).
- Experiments use high-dimensional and complex datasets: Yearbook (32x32=1024 pixels), Cyclone (64x64x2=8192 pixels).
- Datasets from complex, noisy real-world sources: Influenza (millions of tweets for flu prediction), Cyclone (satellite images for weather systems).
3. Koodos demonstrates robustness and significant success across all 12 datasets, which supports its effectiveness and highlights its applicability across various domains.
> W2. loss function
We conducted three rounds of convergence tests on the 2-moons dataset. Fig. 11 (rebuttal) shows that:
1. The loss function and its terms converge effectively, showing Koodos' robustness and ease of training.
2. Each term in the loss function is easy to train and converges with off-the-shelf optimizers such as Adam, without requiring special optimization tricks.
3. Ablation tests (Section A 1.5) show performance drops when any term is removed, justifying their inclusion.
> W3. method complexity. A hyperparameter sensitivity.
The loss weights are designed to balance each term's magnitude, ensuring no term dominates training. For example, for 2-Moons they are set to 1, 100, and 10, since the corresponding loss terms are around 1, 0.01, and 0.1. Koodos is designed to be simple, comprising only a Koopman network and a differential equation solver. We conducted a sensitivity analysis, shown in Fig. 10:
1. Koodos exhibits stable behavior and robust convergence across widely varied hyperparameters, evidencing its ease of training, robust convergence, and insensitivity to hyperparameters.
2. Setting the loss weights in accordance with the magnitude of each loss term is sufficient for achieving good performance.
3. A few hundred Koopman operator dimensions are sufficiently good.
> W4. Theorem 2
We analyze the risk along the timeline based on the cumulative error on the training set; since this error propagates to the test set, a smaller cumulative training error implies a smaller cumulative error on the test set as well.
> W5. comparison limited. AdaRNN Diversify not discussed. Including more TDG
1. AdaRNN and Diversify address different problems compared to CTDG/TDG:
- **Different Objective**: The core problem AdaRNN and Diversify solve is creating domain-splitting algorithms to address the lack of predefined domain divisions in time-series classification datasets. In contrast, TDG/CTDG have predefined domains, with the primary challenge being to capture the evolutionary pattern among temporally sequential domains and generalize it to future domains.
2. We recognize that AdaRNN and Diversify use the adversarial learning strategy DANN. Therefore, we implemented CIDA, a more advanced adversarial-learning baseline than DANN. We also implemented IRM and V-REx to jointly assess the effectiveness of domain-invariant learning. Results in Tab. 4 show that none of these methods achieves competitive results.
3. We implemented the state-of-the-art discrete TDG method TKNet. Results in Tab. 4 indicate it does not handle CTDG. Notably, no other continuous TDG methods are currently available for comparison.
> Q1. How does Koopman Theory align temporally varying distributions?
1. TDG differs from traditional DG. TDG assumes widespread smooth, predictable distribution changes over time, so the field aims to predict future distributions and models by building dynamic models with time-varying parameters, **aligning the model function with predicted future domain distributions**. CTDG extends TDG, relaxing the requirement for fixed-interval discrete-time domain collection, and the test domain is not limited to the immediate one-step future. Defining the problem this way is challenging because it breaks the discrete-time models (e.g., state-space models, LSTM) that traditional TDG relies on. Therefore, we theoretically prove that the parameters of the time-varying model for continuous temporal domains are also continuous, leading to a dynamic predictive model described by differential equations.
2. Solving these differential equations analytically is infeasible, so we use the Koopman operator to simplify learning them, enabling effective numerical solutions.
> Q2. How deal with non-periodic/irregular data variations/significantly change? Can we integrate other mechanisms?
1. TDG and CTDG assume future predictability. If domain changes are irregular or unpredictable, this violates the model's biases. Since different models have different implicit biases, we need to find a suitable framework whose prior assumptions match the setting, such as domain-invariant learning, ensemble learning, or importance weight sampling with restart.
2. Our approach complements other DG methods. For example, a static feature extractor from domain-invariant learning can be used in Koodos, or Koodos can be equipped with detection-and-restart strategies in an online system. Exploring comprehensive systems is a promising future direction.
> Q3. Are the datasets associated with covariate shift?
The datasets we used represent changes in P(Y∣X) and are widely recognized in the TDG field:
- Rotated 2-Moons: labels change with rotation; e.g., the point (1,0) changes from label 0 to 1 after 180 degrees.
- Rotated MNIST: rotating a '6' to look like a '9' should still be labeled '6'.
- Yearbook: changing fashion trends cause the same features (e.g., clothing styles) to have different gender labels over time.
- Cyclone: changes in atmospheric pressure or temperature cause similar satellite images to represent different storm intensities.
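The Rotated 2-Moons example above (the point (1,0) changing label after a 180-degree rotation) can be checked numerically. The sketch below is illustrative only: `rotate` and the threshold rule `label` are hypothetical stand-ins for moon membership, used to show that a fixed location changes label under rotation, i.e. a genuine P(Y|X) shift rather than a pure covariate shift.

```python
import numpy as np

def rotate(p, degrees):
    """Rotate a 2-D point counter-clockwise about the origin."""
    r = np.deg2rad(degrees)
    R = np.array([[np.cos(r), -np.sin(r)],
                  [np.sin(r),  np.cos(r)]])
    return R @ np.asarray(p, dtype=float)

# A point observed at (1, 0) in the original domain lands at (-1, 0)
# after the domain rotates 180 degrees:
assert np.allclose(rotate([1.0, 0.0], 180.0), [-1.0, 0.0])

# With a fixed labeling rule (a hypothetical stand-in for moon
# membership, "label 1 iff x >= 0"), the data occupying a given
# location carries a different label after rotation:
label = lambda q: int(q[0] >= 0)
assert label([1.0, 0.0]) == 1                  # before rotation
assert label(rotate([1.0, 0.0], 180.0)) == 0   # after rotation
```

Since the labeling of locations in input space changes over time, the conditional distribution P(Y|X) itself drifts, which is the setting the rebuttal describes.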
---
Rebuttal Comment 1.1:
Title: Follow-Up on Rebuttal Response
Comment: Dear Reviewer Drn2,
We greatly appreciate your time to review our paper and your valuable comments.
We noticed that we haven’t yet received your response and wanted to kindly inquire if there’s anything further we can do.
We have made extensive clarifications, added detailed discussions, and provided additional experimental results as you suggested. We hope we have effectively addressed your concerns and clarified any potential misunderstandings.
We are eager to hear your thoughts on the efforts we have made during the rebuttal period.
Thank you once again for your invaluable review.
---
Rebuttal Comment 1.2:
Comment: Thanks for the thorough reply from the authors and it has addressed most of my primary concerns. I have increased my score accordingly.
---
Reply to Comment 1.2.1:
Title: Response to Reviewer Drn2
Comment: Thank you for your thoughtful feedback and valuable time. We’re glad to hear that our reply has addressed your concerns. We appreciate your decision to increase the score and are grateful for your constructive input throughout this process. | Summary: The paper presents a novel approach called Continuous Temporal Domain Generalization (CTDG), addressing the challenge of training predictive models under continuously evolving and irregularly observed temporal domains. Unlike traditional TDG methods that rely on discrete time intervals, CTDG captures continuous dynamics of both data and models. The proposed Koopman operator-driven continuous temporal domain generalization (Koodos) framework leverages Koopman theory to learn underlying dynamics and enhances it with a comprehensive optimization strategy. Extensive experiments demonstrate the effectiveness and efficiency of the proposed approach.
Strengths: The paper introduces a novel problem (CTDG) and proposes an innovative solution (Koodos) leveraging Koopman theory.
The methodology is sound and well-executed. And it is well-supported by robust theoretical foundations and demonstrates excellent performance in the experimental results.
Weaknesses: The explanation of how the article addresses domain changes with arbitrary temporal sampling is not very clear.
There is some overuse of formula characters, and the use of L_pred2 seems rather arbitrary.
Additionally, the numerous hyperparameters in the loss function increase the cost of tuning for different datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: The article repeatedly emphasizes the importance of "C" in CTDG and the arbitrary selection of time t. However, does the proposed method specifically address this aspect, or does it primarily rely on Koopman theory?
In the methodology section, the article mentions that domain conditional probability distributions will not experience abrupt changes.
However, this situation holds true only when the sampling time intervals are equal. The method addresses arbitrary sampling; does this imply it also deals with abrupt changes?
Notably, in equation (15), the weights of L_pred and L_pred2 are both α, the weights of L_recon and L_consis are both β, and the weight of L_dyna is γ. Could you explain the rationale behind this? How are these hyperparameters set for different datasets, and what is the basis for their selection? Do different weights significantly affect the results?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing and acknowledging our work! Please read along with the Rebuttal PDF.
>W1. The explanation of how the article addresses domain changes with arbitrary temporal sampling is not very clear.
1. TDG assumes there are widespread smooth, predictable distribution changes over time, so the field aims to predict future distributions and models by building dynamic models with time-varying parameters, aligning the model function with predicted future domain distributions. CTDG extends TDG, relaxing the requirement for fixed-interval discrete-time domain collection, and testing is not limited to immediate one-step future domains.
2. Inferring future model parameters under CTDG is challenging. In our work, we prove that if the domain distribution changes continuously, then the corresponding predictive model parameters should also change continuously. This insight allows us to formulate a differential equation for the predictive model parameters. Through the differential equation, we can predict future model parameters at any continuous time.
3. However, analytically solving this differential equation is almost impossible. Note that the Koopman operator allows us to linearize the nonlinear dynamics of the differential equation by mapping the original nonlinear predictive-model parameter space into a higher-dimensional linear space. By leveraging this operator, we can effectively solve the differential equations numerically, ensuring that the model parameters evolve precisely over time and generalize well to the far future.
4. Finally, the learned differential equation directly computes the predictive model parameters at any specific future time.
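A toy numerical sketch of this encode-evolve-decode idea (not the Koodos implementation: here the lifting map is hand-picked for a textbook system whose exact polynomial Koopman lift is known, whereas Koodos learns the encoder): after lifting, the nonlinear dynamics become linear, dz/dt = Kz, so they can be solved in closed form and queried at any continuous time.

```python
import numpy as np

mu, lam = -0.1, -1.0

# Nonlinear system: dx/dt = mu*x, dy/dt = lam*(y - x^2).
# The polynomial lift z = (x, y, x^2) makes the dynamics exactly linear,
# dz/dt = K z, with K upper triangular:
K = np.array([[mu,  0.0,  0.0],
              [0.0, lam, -lam],
              [0.0, 0.0, 2 * mu]])

def evolve(x0, y0, t):
    """Encode, flow linearly in the lifted space, decode."""
    z0 = np.array([x0, y0, x0 ** 2])        # encode (lift)
    w, V = np.linalg.eig(K)                 # K = V diag(w) V^-1
    zt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ z0).real
    return zt[0], zt[1]                     # decode

def exact(x0, y0, t):
    """Closed-form solution of the original nonlinear system, for checking."""
    xt = x0 * np.exp(mu * t)
    a = lam * x0 ** 2 / (lam - 2 * mu)
    yt = (y0 - a) * np.exp(lam * t) + a * np.exp(2 * mu * t)
    return xt, yt
```

At any continuous time t, with no fixed grid, the lifted linear flow reproduces the nonlinear trajectory, which is the property exploited here for the predictive model's parameter dynamics.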
>W2. There is some overuse of formula characters, and the use of L_pred2 seems rather arbitrary.
1. Before submitting for review, we carefully checked the use of symbols and tried our best to make them concise and clear. We apologize for any confusion. We will carefully review the manuscript again and try to streamline the notation to ensure greater clarity and readability.
2. The loss function L_pred2 is used to quantify the prediction error of dynamically integral parameters performed on domain data. L_pred2 works together with L_pred to constitute the evaluating term of the integral and intrinsic model parameters on the domain task. We acknowledge that the notation might be unclear. To improve understanding, we will rename L_pred and L_pred2 to L_intrinsic and L_integral, respectively. This change will more accurately reflect their roles.
> W3 & Q3. Additionally, the numerous hyperparameters in the loss function increase the cost of tuning for different datasets.
The loss weights are just designed to balance the magnitude of each loss term, ensuring that no single term dominates the model's training process.
1. We conducted a sensitivity analysis to understand the hyperparameters in Koodos: loss weights $(\alpha, \beta, \gamma)$ and the dimensions $n$ of the Koopman operator $\mathcal{K}$. Fig. 10 (in rebuttal PDF) shows the results.
2. The loss weights are set based on the magnitude of each loss term. For instance, in the 2-Moons dataset, the cross-entropy loss $L_{pred}$ and $L_{pred2}$ are approximately 1 after convergence, the $L_{recon}$ and $L_{consis}$ in the Model Space are around 0.01, and $L_{dyna}$ in the Koopman Space is around 0.1. Accordingly, we set the initial values of $\alpha, \beta, \gamma$ to 1, 100, and 10, respectively. We then adjust each weight term independently within its respective range: $\alpha$ and $\gamma$ are varied within 1 to 100, and $\beta$ within 10 to 1000. The dimensions $n$ of the Koopman operator vary within the range of 16 to 2048.
3. It can be seen that:
- Koodos exhibits stable behavior and robust convergence across wide varied hyperparameters, evidencing its ease of training, robust convergence, and insensitivity to hyperparameter.
- Setting the loss weights in accordance with the magnitude of each loss term is sufficient for achieving good performance.
- A few hundred Koopman operator dimensions are sufficiently good; too many can lead to overfitting.
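The magnitude-balancing heuristic described in point 2 can be written down directly. A minimal sketch with illustrative names: the magnitudes are the quoted 2-Moons values, and each weight is chosen so that every weighted term sits at the same scale.

```python
# Loss-term magnitudes observed after convergence on 2-Moons (values
# quoted above); the grouping names are illustrative:
magnitudes = {"pred": 1.0, "model_space": 0.01, "koopman_space": 0.1}

# Scale every term up to the largest one, so no single term dominates:
target = max(magnitudes.values())
weights = {name: round(target / m) for name, m in magnitudes.items()}
print(weights)  # → {'pred': 1, 'model_space': 100, 'koopman_space': 10}
```

These recover the reported initial values (alpha, beta, gamma) = (1, 100, 10).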
>Q1. The article repeatedly emphasizes the importance of "C" in CTDG and the arbitrary selection of time t. However, does the proposed method specifically address this aspect?
1. As we discussed in W1., CTDG extends TDG, relaxing the requirement for fixed-interval discrete-time domain collection, and testing is not limited to immediate one-step future domains. Defining tasks in continuous time is challenging because it breaks the discrete-time models (e.g., state-space models, LSTM) that traditional TDG relies on.
2. Therefore, we theoretically prove that the parameters of the time-varying model for continuous temporal domains are also continuous, leading to a dynamic predictive model described by differential equations.
3. Solving differential equations analytically is almost impossible. The Koopman operator simplifies learning these complex differential equations, enabling effective numerical solutions.
4. Using the learned differential equations, the parameters of the predictive model can be calculated at any specific time with ODE solvers.
>Q2. In the methodology section, the article mentions that domain conditional probability distributions will not experience abrupt changes. However, this situation holds true only when the sampling time intervals are equal. The method addresses arbitrary sampling; does this imply it also deals with abrupt changes?
The assumption that the domain conditional probability changes continuously is based on the continuous nature of the underlying processes. The observation intervals, whether equal or arbitrary, do not influence the continuity of the underlying processes themselves. For example, the temperature change throughout a day is a continuous process, regardless of whether measurements are taken every hour or at irregular intervals. The continuity of the temperature change remains inherent to the process itself.
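This point can be made concrete with a toy continuous process; the daily temperature curve below is hypothetical and chosen only to mirror the example in the answer above.

```python
import math

# A hypothetical continuous daily temperature profile (hours -> degrees):
def temp(t):
    return 20.0 + 5.0 * math.sin(2 * math.pi * t / 24.0)

# Regular hourly sampling and irregular sampling observe the same smooth
# underlying process; the sampling scheme does not alter its continuity:
hourly = [temp(t) for t in range(24)]
irregular = [temp(t) for t in (0.0, 1.7, 4.2, 11.9, 23.5)]
```

Whether the observation times are the integers in `hourly` or the arbitrary floats in `irregular`, both are samples of the same continuous function.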
---
Rebuttal Comment 1.1:
Comment: Thanks for the comprehensive response; my main concerns are well resolved. Thus, I have decided to increase my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer xaRP
Comment: Thank you for your thoughtful consideration and for recognizing our efforts. Your feedback has been invaluable in refining our work. | Summary: This paper introduces a new task: Continuous Temporal Domain Generalization (CTDG) to address the limitations of traditional TDG in handling continuously evolving and irregularly observed temporal data. By proposing the Koopman operator-driven framework (Koodos), this work leverages Koopman theory and optimization strategies to learn and control the continuous dynamics of data and models. Experiments demonstrate the effectiveness and efficiency of Koodos in managing complex high-dimensional nonlinear dynamics and generalizing across continuous temporal domains.
Strengths: 1. This paper introduces a valuable new task, Continuous Temporal Domain Generalization (CTDG), which moves beyond the discrete nature of traditional TDG tasks, enhancing their relevance to real-world applications.
2. The paper proposes the Koopman operator-driven framework (Koodos), effectively leveraging Koopman theory and optimization strategies to understand and control the continuous dynamics of both data and models.
3. It expands the evaluation setting of traditional TDG tasks by introducing datasets with truly continuous data and temporal distribution shifts. This setting encompasses the previous discrete TDG evaluation as a special case, providing a more comprehensive evaluation framework for all TDG methodologies.
4. The authors further evaluate their method and other TDG baselines under this new CTDG evaluation setting, achieving significantly improved performance.
5. The paper is well-written and well-organized.
Weaknesses: 1. Evaluation is limited to small datasets and models, lacking assessment on higher-dimensional datasets, which limits the practical applicability of proposed benchmarks and obscures performance in scenarios with large models and high-dimensional data.
2. The continuous nature of CTDG proposed here appears similar to task settings in CIDA and AdaGraph, which focus on domain adaptation. This reduces the novelty of the task setting in this paper, necessitating further discussion and experimental comparisons for clarification.
3. TKNets, another Koopman theory-based TDG method, is relevant but lacks clear differentiation through discussion and experimental comparisons.
4. Including information on the training and inference costs of each method would enhance the completeness of the study.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does CTDG still adhere to the assumption of smooth distribution shifts?
2. Is the quantitative evaluation in the paper still performed on the last domain?
3. Regarding the Yearbook dataset under CTDG evaluation settings, is it essentially still discrete but can be approximated as continuous due to its large number of domains? If so, which other datasets in CTDG use dense discrete domains to approximate true continuous data?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See my weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for reviewing and acknowledging our work! We see your main concerns are about related works and evaluations. Please read our answers along with the Rebuttal PDF.
> W1. Limited evaluation on small datasets/models; lacks high-dimensional datasets.
1. We provide a summary of the datasets and predictive models used in our work in Tab. 5. It is evident that we have utilized high-dimensional data and modest-sized models in our evaluation: the Yearbook dataset comprises around 20k images with 32*32=1024 pixels, and the model used for this dataset has approximately 163k parameters. Similarly, the Cyclone dataset, with image dimensions of 64x64x2=8192, is tested with a model having 135k parameters.
2. As a comparison, existing discrete TDG benchmarks only use data with fewer than 100 dimensions, and the minimum model size is 12k parameters.
3. Given the intrinsic challenges of the TDG field, further scaling models to very large sizes (e.g., LLMs) is a promising open area that we consider a valuable future direction.
> W2. CTDG's continuous nature resembles CIDA and AdaGraph settings, needing further discussion and comparisons.
1. Our work differs from their setup in the following points:
- **Different Fields**: CIDA and AdaGraph focus on domain adaptation with access to target domain data during training. CTDG extends TDG and focuses on domain generalization without access to the target domain.
- **Different Semantics**: the term 'continuous' is used differently in each work. CIDA: the domain index is a continuous variable instead of a categorical one, aiding the discriminator via a distance-based loss. AdaGraph: continuous arrival of test data and online adaptation. CTDG: the temporally sequential domains are collected in **continuous time** instead of at fixed-interval discrete times.
- **Different Objectives**: CIDA and AdaGraph follow traditional DA/DG settings without modeling the temporal evolution dependence that CTDG captures, even though modeling such temporal patterns has been theoretically shown to yield a lower generalization bound [50].
- CTDG's novelty lies in its unique requirement to capture temporal dependencies and generalize over continuous-time temporal domains, a challenge not addressed by CIDA, AdaGraph, or existing TDG methods.
2. With modifications, CIDA can be applied to our datasets, and we implemented this. Tab. 4 shows that CIDA cannot handle the CTDG problem well.
> W3. TKNets need further discussion and comparisons.
1. Our work and TKNets have different starting points, frameworks, and contributions.
- **Focus and Motivation**: Our work addresses domain evolution over continuous-time, which existing TDG methods, including TKNets, fail to capture effectively as they rely on discrete-time models (e.g., state-space model, LSTM).
- **Central Contribution**: Our primary contributions are theoretical proof that the model for continuous temporal domains is also continuous, based on which we construct a dynamic system described by differential equations. The Koopman operator is used to simplify learning complex differential equations. In contrast, TKNets' core contribution is proposing to use the Koopman operator to model the domain state transition matrix, serving a different role from ours.
2. We implemented TKNets as a baseline for the CTDG task. The results in Tab. 4 show that TKNets cannot handle the CTDG problem well.
> W4. Including the training and inference costs would be better.
We have added the related experiments; please check Tab. 6. Our model strikes a good balance between training time and effectiveness.
> Q1. Does CTDG still adhere to the assumption of smooth distribution shifts?
1. Yes, CTDG extends TDG and retains the smooth assumption.
2. Moreover, CTDG mitigates two key assumptions of traditional TDG:
- train domains are collected at fixed time intervals with no missing domains, e.g., t=1, t=2, t=3
- test domains appear only at a time interval in the immediate future, e.g., t=4
3. Unlike these assumptions, CTDG allows training domains to be collected at arbitrary times, e.g., t=1.2, t=2.432, t=5.4693…, and adapts to test domains that appear at any point in the future, e.g., t=6.324, t=8.3, t=9.45, limited neither to the immediate subsequent step nor to a single future domain.
> Q2. Is the quantitative evaluation in the paper still performed on the last domain?
1. No. As we answered in Q1, one important task of CTDG is to generalize the model to any future time point. Therefore, we perform evaluations on multiple test domains that are arbitrarily and irregularly distributed after the training period.
2. For the number of test domains in each dataset, one can refer to Tab. 5, while the time distribution of these test domains can be found in Figure 6, marked with gray shading.
> Q3. Is the Yearbook dataset discrete but dense as continuous? How about others?
1. No. We do not use dense discrete domains to approximate continuous data. All datasets are designed to be consistent with real-world CTDG case situations: (1) Discrete-Time Domains but with Missing Domains; (2) Event-Driven Domains; (3) Streaming Data with Arbitrary Data Collecting Times.
2. All three of these scenarios require CTDG. Therefore, the datasets are designed as follows:
- The YearBook dataset represents (1). The raw data is collected by year; we randomly sampled 40 years from the 83 available to represent an incomplete temporal domain collection process.
- The Cyclone dataset represents (2). When each cyclone occurs, the satellite collects a series of images for its entire life cycle, with the date of its occurrence representing a temporal domain time.
- Situation (3) is simulated by setting multiple randomly started data collection windows on the tweet stream and the price stream, creating the Influenza and House datasets.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Rebuttal
Comment: I appreciate the authors' rebuttal, which has addressed some of my major concerns. As a result, I'll slightly increase my rating. However, I'm not entirely confident about this evaluation, particularly regarding the assessment of the methodology. I may further reassess after considering the discussions between the authors and other reviewers. I would be grateful if the authors could address the following concerns:
## 1. Necessity and Urgency of Improvements
While I acknowledge that the improvements in task setting and problem modeling for TDG are necessary, I question their urgency. Given that most current TDG datasets are relatively toy-like, with low dimensionality and limited to specific scenarios with relatively simple temporal distribution shifts:
- Is the main bottleneck in applying TDG to real-world applications still on the temporal aspect?
- Will the proposed modeling and methods hold in more complex scenarios?
## 2. Dataset Realism
I appreciate the inclusion of Yearbook and Cyclone datasets, which are indeed more realistic than previous TDG datasets. However, I'd like to point out that these datasets are still quite simple in terms of:
- Data dimensionality
- Dataset size
- Distribution shift complexity
## 3. Validation of Improvements
While the design intuitions for the improvements seem reasonable and meaningful, I feel the authors haven't used appropriate datasets to motivate and validate the value of these improvements. Using Yearbook as an example:
- High school graduation photos are typically taken at a fixed time each year
- Is there a significant difference between continuous-time and fixed-interval discrete-time in this context?
## 4. Theoretical Contributions
The authors state, "Our primary contributions are theoretical proof that the model for continuous temporal domains is also continuous." However:
- Didn't we already have such insights or proofs from DRAIN or GI?
- I acknowledge that formally proving this point is a major contribution if previous work hasn't done so.
## 5. Comprehensiveness of Comparisons
Regarding comparisons with other TDG methods:
- Are the comparisons comprehensive?
- Could we improve the performance of these methods in the CTDG setting by dividing them into denser discrete temporal domains?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 2Dt9 Comments
Comment: Thank you for your continued engagement and for acknowledging the improvements! We appreciate your thoughtful feedback.
> D1.
> - Is the main bottleneck of TDG application on the temporal aspect?
> - Will the proposed methods hold in more complex scenarios?
1. The temporal aspect remains a significant challenge in the field of TDG. Existing TDG models struggle with temporal flexibility, e.g., domains at arbitrary times and domains further in the future (beyond just the next time point), revealing **a critical gap in the ability to abstract accurate temporal dynamics from temporal domain data**. Our approach alleviates this challenge by modeling dynamic systems and leveraging Koopman theory, representing a significant advancement in capturing the temporal dynamics of TDG and marking an important research direction worthy of further exploration.
2. Our methods hold in more complex scenarios. Real-world changes are often driven by continuous processes and the domains of interest can be at arbitrary times, which align with our approach’s foundational assumptions. The continuous nature of these changes means that our methods remain robust even as scenarios increase in complexity.
> D2. Dataset Realism
1. Thank you for acknowledging our use of the Yearbook and Cyclone datasets as a step towards more realistic TDG datasets. We appreciate your recognition of our improvements in this area.
2. We fully acknowledge that larger, higher-dimensional datasets would provide stronger empirical validation. While TDG/CTDG is still an emerging field with developing benchmarks, our endeavors aim to push the community progressively toward larger-scale, more realistic data through this and subsequent works.
> D3. Yearbook taken at a fixed time each year. Is there difference between continuous-time and discrete-time of such?
1. The deciding factor between CTDG and traditional TDG models is not whether data is collected at integer time points, but **whether the time intervals between data collections are fixed**. Traditional TDG models require equal intervals because they do not account for the amount of time that elapses between consecutive domains; when the intervals vary, TDG models are blind to that variation. In contrast, continuous-time models account for these durations and can therefore handle domains at arbitrary time points.
2. The Yearbook dataset used in the study samples 40 years over an 83-year span, introducing variability in time intervals. Traditional TDG models cannot capture such variability, while CTDG models can. It is a significant difference that should not be ignored.
> D4. Do we have continuous proofs from DRAIN or GI? If no, proving this is a major contribution.
Thank you for acknowledging our theoretical contribution.
1. The GI introduced the intuitive idea that encouraging learning functions that can be approximated linearly may improve a model's generalization ability. However, **this was demonstrated through empirical experiments without theoretical proof**.
2. Moreover, while GI enforces local linearity properties of the model, it does not explore the exact mechanisms of how future model parameters can be determined. In contrast, our Theorem I provides a deep understanding of how CTDG models can be computed and generalized.
3. DRAIN suggests that future model parameters are conditional on parameter statuses from historical domains, **but it does not assume or require continuity in these parameters**.
> D5.
> - Are the comparisons comprehensive?
> - Could we improve TDG model in CTDG by dividing them into denser discrete temporal domains?
1. We have compared widely with well-recognized TDG methods GI, DRAIN, and TKNets using 12 datasets in both continuous and discrete domains under 3 metrics. Our method shows significantly improved performance in all these comprehensive comparisons, clearly demonstrating its effectiveness.
2. TDG methods underperform in CTDG settings because they cannot sense the variable time durations between consecutive domains, which is an important addition of CTDG over TDG.
3. If we turn CTDG tasks into TDG tasks, we must interpolate additional domains at the intermediate time points between the CTDG domain times. For example, if CTDG domains occur at times (1.1, 2.3, and 5.1), TDG would require interpolating domains at (1.1, 1.5, 1.9, 2.3, 2.7, 3.1, 3.5, 3.9, 4.3, 4.7, 5.1) to form the fixed intervals that TDG needs. Generating the entire data of the domain at each interpolated time point is highly challenging, especially as the size, dimensionality, and complexity of the data are nontrivial and can keep growing, and such interpolation introduces extra uncontrollable error. Moreover, it significantly reduces efficiency, potentially making the process impractically slow, as the number of domains could multiply drastically. | Rebuttal 1:
Rebuttal: Thank you to each reviewer for their valuable time.
Please download the pdf file before reviewing the responses.
Best,
Authors
Pdf: /pdf/f7a1110a43d2cc9c7f0c49aad29c34dcf34b12fe.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment | Accept (poster) | Summary: The authors proposed a novel pipeline for once-for-all training multiple subnets in a supernet LLM under different resource constraints. The entire pipeline consists of a knowledge preserving subnet selection utilizing DP to sample depth and width and a new LoRA to resolve the gradient conflicts during training multiple subnets jointly. The experimental results on two widely used decoder-only architectures show the efficacy of the pipeline.
Strengths: - The paper is well-motivated and well-written. To deploy the LLM on multiple resource-constrained devices, fine-tuning multiple subnets jointly is a promising direction to save the computation and training efforts.
- Resolving the complexity of jointly selecting which layers to remove in layer-wise pruning has long been an open question. Although the authors do not entirely solve the problem, the proposed DP solution may also benefit other layer-wise network architectures, e.g., CNNs, encoder models, etc.
Weaknesses: - Some technical details are not clear to me. For example, how are the depth and width defined in decoder models? Is a self-attention block a layer? Or do the authors consider the FFN and MHSA as two individual layers? In terms of the width, it is clear for the FFN, but is it the number of heads in MHSA?
- What metric is used as the importance in subnet sampling? I assume that the importance is not derived from gradients, since DP is done layer-by-layer in a forward manner. If so, will that metric work better than gradient-based metrics?
- The authors introduce a new LoRA in the paper. Do the authors freeze the backbone LLM and only tune the adapters? If not, how do the authors avoid co-adaptation of the backbone and adapters? The original LoRA was proposed to preserve the learned knowledge and fine-tune only a low-rank adaptation on a small amount of new data.
- Also, are the adapters only added to the MHSA? If so, can the authors provide some theoretical or experimental results to show why FFN layers do not suffer from gradient conflicts?
Technical Quality: 3
Clarity: 3
Questions for Authors: See above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the value of our work and for your constructive comments! We have addressed all your concerns below.
**1. The definition of depth and width in decoder models**
Thank you for pointing this out! In this work, depth is defined as a whole self-attention block, including both an MHSA and an FFN.
For width, your understanding is correct. Specifically, for FFNs, the width is defined as channels. For MHSA, the width is defined as the number of attention heads, since all channels within one attention head will be kept or pruned as a whole for real-device efficiency, following the definitions and strategies in previous structured LLM pruning works ([6][8] cited in our manuscript).
We will clarify this in the final version.
---
**2. The metric used as the importance in subnets sampling of our DP algorithm**
As elaborated in Section 5.4 of our manuscript, we adopt the MMLU accuracy, which serves as an indicator for the encoded factual knowledge, as the DP metric for subnet selection. This is based on the key insight that the loss of factual knowledge during compression is hard to restore during fine-tuning and thus should be prioritized in subnet selection, while the language modeling capability is easier to recover through fine-tuning.
In addition to the benchmark between our DP algorithm and existing layer pruning methods in Table 2 of our manuscript, where our method consistently outperforms all baselines, we follow your suggestion to further benchmark our DP algorithm against gradient-based metrics. Specifically, the gradient-based metric we tried is defined as the average gradient received by all MLPs falling within one layer (defined as a whole self-attention block), where layers with lower average gradients are considered less important and thus pruned. Following the evaluation setting of Table 2 of our manuscript, we summarize the achieved Wikitext2 PPL and MMLU accuracy when retaining different numbers of layers of LLaMA 2 7B in the table below.
| Metric | Method | 24 | 23 | 22 | 21 | 20 | 19 | 18 | 17 | 16 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Wikitext2 (PPL) | Gradient-based | 12.689 | 16.394 | 21.813 | 25.209 | 67.262 | 326.666 | 575.863 | 2295.172 | 2140.926 |
| | Ours | 11.66 | 12.77 | 17.59 | 20.06 | 23.77 | 28.83 | 38.70 | 70.87 | 95.16 |
| MMLU (Acc %) | Gradient-based | 24.4 | 26.2 | 24.9 | 24.6 | 23.9 | 234.0 | 24.8 | 24.8 | 24.2 |
| | Ours | 46.2 | 44.8 | 44.6 | 44.1 | 41.2 | 41.3 | 43.1 | 34.7 | 28.9 |
We can observe that our method consistently outperforms gradient-based metrics. According to our analysis in Section 4.2, this may be because (1) gradient-based metrics cannot effectively evaluate the joint contributions of different layer combinations like our DP method; and (2) gradient-based metrics cannot effectively reflect the encoded factual knowledge.
---
**3. Whether the backbone is frozen and only the adapters are tunable**
Yes, we freeze the backbone and only tune the adapters. This is because we find that one-for-all fine-tuning of the whole backbone on low-resource fine-tuning data results in suboptimal performance of particular subnets, as shown in Table 3 of our manuscript.
---
**4. Where the adapters are added**
We follow the settings in QLoRA [1] to add adapters to all linear layers in the pre-trained LLM, including both MHSAs and FFNs.
To study the relative importance of adapters in MHSA and FFNs, we further explore variants where adapters are added only to one of them on top of our method and summarize the results in the table below.
| Remaining Ratio | Attn only: Wikitext2 | Attn only: MMLU (%) | FFN only: Wikitext2 | FFN only: MMLU (%) | Attn + FFN: Wikitext2 | Attn + FFN: MMLU (%) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 80% | 13.31 | 44.6 | 12.14 | 45.6 | 10.34 | 47.4 |
| 65% | 14.11 | 38.3 | 13.63 | 39.1 | 11.76 | 40.3 |
| 50% | 19.23 | 32.5 | 16.98 | 33.1 | 15.58 | 34.0 |
We find that (1) adding adapters to both layers achieves the best task performance, and (2) adding adapters to FFNs results in better task performance compared to adding them to MHSAs. This indicates that FFNs may play a more effective role in adjusting the output token distributions after compression.
[1] “QLoRA: Efficient Finetuning of Quantized LLMs”, T. Dettmers et al., NeurIPS’23.
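As a concrete illustration of the adapter scheme discussed above, here is a minimal numerical sketch of the standard LoRA parameterization (our own toy construction with made-up shapes, not the paper's implementation): each frozen linear weight W, whether inside an MHSA or an FFN, is augmented with a trainable low-rank update B @ A.

```python
import numpy as np

# Illustrative LoRA sketch (hypothetical shapes, not the paper's code):
# the frozen weight W is augmented with a low-rank update (alpha/r) * B @ A,
# so only A and B (rank r << min(d_out, d_in)) receive gradients.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 6, 4, 2, 16
W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init)

def lora_forward(x):
    """Adapted linear layer: frozen path plus scaled low-rank path."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
y = lora_forward(x)
```

Because B is zero-initialized, the adapted layer starts out exactly equal to the frozen one, which is what lets fine-tuning begin from the pre-trained behavior.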
---
Rebuttal 2:
Comment: Dear Reviewer,
We sincerely appreciate the time you dedicated to providing valuable feedback on our paper. In this author response, we have addressed all of your initial concerns. If you have any further questions or concerns, we are happy to discuss them with you. Additionally, we welcome any new suggestions or comments you may have!
Best,
The Authors of Paper #13852 | Summary: This paper proposes AmoebaLLM: a one-for-all fine-tuning and compression framework for delivering pruned and accurate subnets from a pre-trained LLM at various pruning (both depth and width) ratios without the need to fine-tune individual subnets. AmoebaLLM consists of two components: 1) a dynamic programming-based approach to layer selection for selecting subnets of certain depth that takes into account various layer interactions, as opposed to greedy-layer based approaches; 2) after the subnets are defined, a one-for-all fine-tuning strategy is proposed based on gating multiple LoRA heads that are subnet-specific. The training strategy uses a hybrid loss where the largest subnet (always sampled) computes its loss with the ground truth, and all subsequently sampled subnets compute a distilled loss from the largest subnet. The final loss is computed by weighing each subnet's loss appropriately. To demonstrate the efficacy of their method, the authors perform experiments on both LLama 2 7B and Vicuna 7b v1.5 and compare against prior LLM pruning techniques.
Strengths: - The paper tackles a challenging and relevant problem: compressing pre-trained LLMs with various complexities while maintaining their performance
- The paper is well written and easy to follow
- The method overall is novel, and performs better than other pruning methods.
- The paper provides interesting insights into how different LLM metrics are impacted by model compression, e.g., Sec 5.4 shows that MMLU is a better metric for calibration.
Weaknesses: - The claim that full-model fine-tuning suffers from severe gradient conflicts appears rather weak. In line 233 the authors state "As analyzed in Sec. 2 and demonstrated in Sec. 5.3 ...", whereas Section 2 merely conjectures that this is an issue, using phrases like "would likely fail" and "as it may omit layers". I would suggest citing the results from Section 5.3 directly to motivate this issue. As for the results themselves in Table 3, the performance of the full model (second row) does not seem consistent. While it performs poorly on Wikitext2, its MMLU performance remains comparable with the SMoL baseline and even seems to improve when pruning more (24 to 20), unlike all other methods. It is not clear how these results lead to the conclusion that there is a "gradient conflict". How would the same experiments look when using the Vicuna model? And how would the other evaluation metrics look?
- Using dynamic programming in the context of neural network pruning is not new (https://arxiv.org/pdf/2308.10438, https://ieeexplore.ieee.org/document/9116287 to name a few). The authors should cite and discuss these works, even if they do not tackle LLMs specifically.
- In line 199, the authors claim that the DP complexity is O(MN). This is a bit misleading, as it ignores the complexity of computing P(n,m), which I argue also depends on M and N. Furthermore, the actual run-time of the DP step is not discussed; is it negligible?
- In Table 1, comparing with AmoebaLLM$^\dagger$ is not fair, as it is trained for longer (on top of the training performed in regular AmoebaLLM). How would the other methods compare with the same amount of training?
- It would improve reading clarity if the authors explicitly mentioned how many layers the studied models contain, as simply stating how many layers are pruned in depth shrinkage is hard to gauge without that context. This should also be mentioned in the tables.
Technical Quality: 3
Clarity: 3
Questions for Authors: - AmoebaLLM produces different subnets, some may have the same parameter count due to different width/depth shrinkage ratios, but it is not clear how a model is selected for a certain experiment (e.g. Fig 3). Is it simply enumerating all the subnets and picking the fastest one? Are the profiling results obtained from Section 3 being used for this, or is that purely for motivating the problem?
- What ranks are used for LoRA?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations of their proposed work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the novelty and interesting insights of our work, as well as for your constructive comments! We have addressed all your concerns below.
**1. Cite the results from Section 5.3 to motivate the issue of full model fine-tuning**
Thank you for the suggestion! We believe what you suggested is a more direct way to motivate the adoption of adapters and will follow your recommendation in the final version.
---
**2. Clarify the inconsistent results of Table 3**
Thanks for the interesting question! The reasons for the comparable MMLU performance (which reflects the understanding of factual knowledge) between full-model fine-tuning and the other parameter-efficient settings are two-fold:
*(1)* As highlighted and analyzed in Section 5.4 of our manuscript, as well as echoed by recent observations regarding the role of LLM fine-tuning [1][2][3], new factual knowledge can hardly be gained during fine-tuning, and the loss of factual knowledge during compression is hard to restore during fine-tuning. As such, the MMLU performance is mainly determined by the selection of subnets, which is the same across settings in Table 3 of our manuscript, thus making the MMLU accuracy relatively comparable.
*(2)* Following lm-evaluation-harness ([69] cited in our manuscript), the MMLU accuracy is calculated by picking the largest logit among the four logits corresponding to A/B/C/D, instead of among all logits. As such, even if the model performs poorly at language modeling on the MMLU dataset, it can still maintain comparable accuracy. To validate this, we further report the PPL achieved on the MMLU dataset, where the target texts are constructed as the question plus the correct answer contents. As shown in the table below, which is an extended version of Table 3 of our manuscript, we can observe that, similar to the PPL on Wikitext2, full-model fine-tuning suffers from a notable PPL increase on the MMLU dataset, indicating its failure at language modeling on this dataset.
| Method | 32 layers: Wikitext2 (PPL) | 32 layers: MMLU (PPL) | 32 layers: MMLU (Acc %) | 24 layers: Wikitext2 (PPL) | 24 layers: MMLU (PPL) | 24 layers: MMLU (Acc %) | 20 layers: Wikitext2 (PPL) | 20 layers: MMLU (PPL) | 20 layers: MMLU (Acc %) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Per-subnet ft. | 5.54 | 20.62 | 46.4 | 10.57 | 45.59 | 41.9 | 15.94 | 65.9 | 41.7 |
| - SMoL (+full model) | 5.82 | 27.02 | 46.6 | 38.48 | 291.11 | 32.6 | 167.74 | 1349.17 | 36.5 |
| - SMoL (+LoRA) | 6.97 | 32.75 | 40.6 | 12.71 | 81.9 | 40.0 | 19.12 | 122.67 | 37.9 |
| - Loss-mag. Balancing | 6.77 | 26.34 | 42.0 | 12.63 | 59.81 | 40.1 | 18.19 | 91.07 | 39.3 |
| **Full** | **6.36** | **25.02** | **47.2** | **12.40** | **55.8** | **45.1** | **18.15** | **84.27** | **41.0** |
Following your suggestion, we have also provided the benchmark with the full model fine-tuning variant of our method on Vicuna 7B below. The results are consistent with previous observations: (1) full model fine-tuning leads to poor language modeling capabilities across both datasets, and (2) the MMLU accuracy of full model fine-tuning does not suffer a very drastic drop but is notably worse than our method.
| Method | 32 layers: Wikitext2 (PPL) | 32 layers: MMLU (PPL) | 32 layers: MMLU (Acc %) | 24 layers: Wikitext2 (PPL) | 24 layers: MMLU (PPL) | 24 layers: MMLU (Acc %) | 20 layers: Wikitext2 (PPL) | 20 layers: MMLU (PPL) | 20 layers: MMLU (Acc %) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Full Model Finetuning | 7.29 | 25.52 | 46.9 | 40.7 | 164.67 | 39.3 | 134.07 | 882.82 | 37.1 |
| Ours | 6.85 | 23.27 | 47.9 | 11.87 | 49.51 | 46.1 | 14.77 | 67.73 | 45.4 |
[1] “Does fine-tuning llms on new knowledge encourage hallucinations?”, Z. Gekhman et al., arXiv’24.
[2] “LIMA: Less Is More for Alignment”, C. Zhou et al., NeurIPS’23.
[3] “R-Tuning: Instructing Large Language Models to Say ‘I Don’t Know’”, H. Zhang et al., NAACL’24.
---
**3. Missing citations**
Thank you for providing these related works! We will cite and comment on them in the final version.
---
**4. The DP complexity and run-time**
Thank you for pointing this out! You are right that the DP complexity should take the varying number of layers involved in evaluating *P(n, m)* into consideration. The updated DP complexity is *O(MN(M-N))*.
For the run-time of our DP algorithm, when applying it to LLaMA2 7B on an NVIDIA A5000 GPU in our case, where N=32 and M=16, the total consumed time is 1 hour.
We will add the above information in our final version.
---
Rebuttal 2:
Title: Author Response - Part 2
Comment: **5. Benchmark with baselines with the same amount of training time**
We first clarify that in Table 1, we follow a standard criterion in the literature, i.e., ensuring the same number of fine-tuning tokens across both the baselines and our method.
Following your suggestion, we further train the baselines with the same amount of training time as our method, which corresponds to 20k training iterations on Alpaca-gpt4. As shown in the table below, we report the MMLU accuracy as well as the average accuracy across 7 tasks, following the task list of Table 1. We observe that (1) the baselines fine-tuned with 20k iterations generally maintain comparable performance with those trained with 10k iterations, potentially because 10k iterations is sufficient for fine-tuning on Alpaca-gpt4; and (2) our method still outperforms all baselines across various tasks.
| **Remaining Ratio** | **Method** | **Training Iterations** | **Training Time** | **MMLU (%)** | **Average Acc (%)** |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 80% | FLAP | 10k | 15h | 40.21 | 60.98 |
| | | 20k | 30h | 38.62 | 61.40 |
| | Shortened LLaMA | 10k | 15h | 26.45 | 58.72 |
| | | 20k | 30h | 26.55 | 59.63 |
| | Ours | 10k | 30h | **40.70** | **62.29** |
| 65% | FLAP | 10k | 15h | 33.28 | 56.12 |
| | | 20k | 30h | 35.4 | 55.93 |
| | Shortened LLaMA | 10k | 15h | 24.89 | 52.57 |
| | | 20k | 30h | 24.70 | 53.95 |
| | Ours | 10k | 30h | **36.00** | **56.96** |
| 50% | FLAP | 10k | 15h | 27.67 | 51.12 |
| | | 20k | 30h | 27.55 | 51.32 |
| | Shortened LLaMA | 10k | 15h | 24.76 | 47.35 |
| | | 20k | 30h | 25.10 | 49.22 |
| | Ours | 10k | 30h | **30.60** | **52.19** |
**6. The number of remained depth and width in Table 1**
Thank you for the suggestion! The (depth, width scale) for AmoebaLLM with 80%/65%/50% remaining ratios in Table 1 are (30, 0.875)/(28, 0.75)/(22, 0.75), respectively. We will follow your suggestion to add this information to Table 1 to improve readability.
---
**7. The detailed subnet selection strategy**
The profiling results in Section 3 are purely for motivating the problem. Here is our current subnet selection strategy: we adopt a hierarchical search strategy to deliver subnets from our design space that satisfy the target efficiency constraint, e.g., 50% weight remaining ratios in Table 1 of our manuscript, while maximizing the achievable accuracy. Specifically, we first perform a coarse grid search across uniformly spaced depth and width settings based on a small calibration set, e.g., 20 samples from the MMLU dataset, to identify the subnets that satisfy the given efficiency constraint with maximized accuracy. Next, we perform a more fine-grained grid search within depth/width ranges surrounding the optimal subnet identified in the coarse grid search stage. This process typically evaluates 40 promising subnets and takes no more than 10 minutes on an NVIDIA A5000 GPU.
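The coarse-to-fine procedure above can be sketched in a few lines of Python. This is our own illustrative sketch, not the actual implementation: `evaluate` (calibration-set accuracy) and `meets_constraint` (the efficiency check) are hypothetical callbacks, and the fine stage is simplified to searching the neighborhood of the coarse optimum rather than inserting intermediate grid points.

```python
import itertools

def hierarchical_search(depths, widths, evaluate, meets_constraint, refine_radius=1):
    """Coarse grid search over (depth, width), then a finer search around the optimum."""
    # Stage 1: coarse grid over uniformly spaced depth and width settings.
    coarse = [(d, w) for d, w in itertools.product(depths, widths)
              if meets_constraint(d, w)]
    best = max(coarse, key=lambda s: evaluate(*s))
    # Stage 2: search the (depth, width) neighborhood of the coarse optimum.
    d_idx, w_idx = depths.index(best[0]), widths.index(best[1])
    near_d = depths[max(0, d_idx - refine_radius): d_idx + refine_radius + 1]
    near_w = widths[max(0, w_idx - refine_radius): w_idx + refine_radius + 1]
    fine = [(d, w) for d, w in itertools.product(near_d, near_w)
            if meets_constraint(d, w)]
    return max(fine, key=lambda s: evaluate(*s))

# Toy usage: maximize a proxy accuracy d*w subject to an efficiency cap.
depths, widths = [20, 24, 28, 32], [0.5, 0.75, 1.0]
best = hierarchical_search(depths, widths,
                           evaluate=lambda d, w: d * w,
                           meets_constraint=lambda d, w: d * w <= 24)
print(best)  # (24, 1.0)
```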
We empirically find that the above strategy works well at the scale of our target problem and can already deliver high-quality subnets that outperform previous compression methods. We also note that more complex subnet selection strategies, such as the evolutionary search adopted by [11]-[15] cited in our manuscript, can also be employed, which will be our future work.
We will clarify this in the final version.
---
**8. The rank of LoRA**
We follow QLoRA [1] and adopt a rank of 64. We will add this information to the final version.
[1] “QLoRA: Efficient Finetuning of Quantized LLMs”, T. Dettmers et al., NeurIPS’23.
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response. The authors have addressed most of my concerns/questions. The additional data/clarifications provided in this rebuttal should be included in the final manuscript as the authors mentioned. I will increase my score (6->7).
---
Reply to Comment 2.1.1:
Comment: Thank you for taking the time to review our rebuttal responses and for providing positive feedback! We are encouraged to hear that our rebuttal has addressed most of your concerns. Following your suggestion, we will include the additional data and clarifications from this rebuttal in our final manuscript.
---
Rebuttal 3:
Comment: Dear Reviewer,
We sincerely appreciate the time you dedicated to providing valuable feedback on our paper. In this author response, we have addressed all of your initial concerns. If you have any further questions or concerns, we are happy to discuss them with you. Additionally, we welcome any new suggestions or comments you may have!
Best,
The Authors of Paper #13852 | Summary: The paper proposes a new framework, named AmoebaLLM, that adapts any LLM to achieve optimal efficiency across different platforms and applications. Specifically, the framework contains two stages. The first stage, denoted the knowledge-preserving stage, creates a subnet of the LLM via dynamic programming given an arbitrary depth or width. The second stage, denoted the one-for-all fine-tuning stage, fine-tunes the obtained subnet to achieve optimal performance on the given application (dataset). To fine-tune the subnet, the paper proposes shape-aware LoRAs to adapt the knowledge and a loss-magnitude balancing scheme to ensure efficient distillation. All combined, the method achieves SOTA performance among all the pruning methods and obtains the best accuracy at the same latency across different platforms.
Strengths: 1. The target of the paper is practical and interesting. It is indeed a good open problem to implement different LLMs within different constraints of devices and platforms while achieving the optimal trade-off between efficiency and accuracy.
2. This method, the DP-based depth shrinking strategy, could construct an any-shape LLM with reasonable performance, which is promising in different applications.
3. The presentation and writing are good and easy to follow.
Weaknesses: 1. About latency in Section 5.2: The work claims that it achieves the accuracy-efficiency frontier. In terms of accuracy, this is convincing, as shown in Table 1. However, for efficiency, it seems only Fig. 3 is related to this topic, and it only shows a trade-off between latency and accuracy. It is not sufficient to evaluate efficiency purely based on latency.
2. In my understanding, efficiency should also account for the resources consumed by the pruning method itself. First, the DP algorithm has a high computational complexity. Second, the proposed method is somewhat limited by its one-for-all fine-tuning, which is a training process tied to a specific dataset, and by the manual selection of the subnet; thus the method is not perfectly efficient. In this case, the authors may consider adjusting their claim on efficiency.
3. The selection of the subnet seems to be related to the profiling of the generation latency. It is purely empirical and may be error-prone when it is generalized to other platforms.
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. What is the definition of the latency?
2. Is it possible to provide the details of selecting the subnet shape that favors the hardware characteristics?
3. Which LLM is used across all the experiments?
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: No negative societal impact is found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our work as practical and interesting, as well as for your constructive comments! We have addressed all your concerns below.
**1. The definition of latency and more real-device efficiency measurement**
We clarify that the latency used in Fig. 3 of our manuscript is start-up latency, i.e., the end-to-end time used to finish the generation of one sample (batch size = 1, sequence length = 128).
Following your suggestion, to provide a more comprehensive evaluation of inference efficiency, we further measure the start-up latency (batch size = 1) and throughput (batch size = 16) for generating/prefilling a sequence with a length of 128, respectively, using the MLC-LLM framework on an NVIDIA A5000 GPU. We provide the achieved accuracy (averaged over the 7 tasks from Table 1 of our manuscript) and efficiency metrics in the table below. We can observe that our method still consistently achieves the best accuracy-efficiency trade-off.
| | | Generation | | Prefilling | | |
|---|---|:---:|:---:|:---:|:---:|---|
| Remaining Ratio | Method | Start-up latency/s (bs=1) | Throughput (bs=16) | Start-up latency/s (bs=1) | Throughput (bs=16) | Accuracy (%) |
| 80% | Shortened LLaMA | 2.834 | 5.528 | 0.0405 | 48.810 | 58.72 |
| | FLAP | 2.847 | 5.427 | 0.0418 | 45.701 | 60.98 |
| | **Ours** | **2.879** | **5.286** | **0.042** | **46.256** | **62.29** |
| 65% | Shortened LLaMA | 2.308 | 6.785 | 0.0331 | 60.060 | 52.57 |
| | FLAP | 2.372 | 6.376 | 0.0349 | 56.437 | 56.12 |
| | **Ours** | **2.294** | **6.663** | **0.035** | **56.398** | **57.43** |
| 50% | Shortened LLaMA | 1.782 | 8.887 | 0.0259 | 77.295 | 47.35 |
| | FLAP | 1.883 | 8.056 | 0.0275 | 71.942 | 51.12 |
| | **Ours** | **1.860** | **8.124** | **0.0281** | **73.260** | **52.19** |
**2. The complexity of our DP algorithm**
To address your concern, we provide the consumed time of our DP algorithm here. As elaborated in Section 4.2 of our manuscript, when targeting the removal of M layers out of all N layers, the total number of evaluations required by the DP algorithm is M(N-1)/2. For each evaluation, as mentioned in Section 5.4, we measure the MMLU accuracy of 20 samples, which takes about 15 seconds on average on an NVIDIA A5000 GPU for LLaMA2 7B.
As such, when applying our DP algorithm to LLaMA2 7B in our case, where N=32 and M=16, the total consumed time is about 1 hour, which is about 1/15 of the LoRA fine-tuning time of our baselines in Table 1. It is worth noting that constructing the DP table is a one-time effort for each pre-trained LLM model and does not need to be repeated for each target subnet configuration.
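As a sanity check on the numbers above, the evaluation count and wall-clock estimate can be reproduced with a few lines (the helper name is ours, not part of the codebase):

```python
def dp_search_cost(n_layers, n_removed, secs_per_eval=15.0):
    """Evaluations and wall-clock time for building the DP table, assuming
    M(N-1)/2 subnet evaluations at ~secs_per_eval seconds each."""
    n_evals = n_removed * (n_layers - 1) // 2
    return n_evals, n_evals * secs_per_eval

evals, secs = dp_search_cost(n_layers=32, n_removed=16)
print(evals, secs / 3600)  # 248 evaluations, ~1.03 hours
```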
---
**3. The overall training efficiency of one-for-all fine-tuning**
First, we clarify that our one-for-all fine-tuning is not dataset-specific; instead, we perform a generic fine-tuning on alpaca-gpt4, as mentioned in Section 5.1 of our manuscript, and then evaluate the subnets with varying shapes derived from the fine-tuned model across different test datasets and tasks, as shown in Table 1 of our manuscript. In addition, the selection of subnets can be automatically performed within 10 minutes, as elaborated in our response to Question 5.
Furthermore, beyond the inference efficiency analyzed in Section 5.2 of our manuscript as well as in Question 1, the advantage of our method in training efficiency is that it requires a constant *O(1)* training time with respect to the number (*N*) of subnets with varying efficiency constraints to be delivered, thanks to the one-for-all fine-tuning. In contrast, previous compression methods require *O(N)* fine-tuning time.
For example, for delivering 10 subnets with varying weight remaining ratios in Table 1 of our manuscript, when fine-tuned on the same number of tokens, all baseline methods require about 150 GPU hours in total on an NVIDIA A5000 GPU, while our method requires only 30 GPU hours.
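The *O(1)*-versus-*O(N)* accounting above amounts to the following back-of-the-envelope arithmetic (a sketch using the per-run costs quoted in this response; the function names are ours):

```python
def baseline_total_hours(n_subnets, hours_per_run=15.0):
    # O(N): previous compression methods fine-tune once per delivered subnet.
    return n_subnets * hours_per_run

def one_for_all_total_hours(n_subnets, hours=30.0):
    # O(1): a single one-for-all fine-tuning run covers all subnets.
    return hours

print(baseline_total_hours(10), one_for_all_total_hours(10))  # 150.0 30.0
```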
We will follow your suggestion to emphasize our achieved trade-off between accuracy and inference efficiency, while also clarifying and highlighting our claim regarding our method’s advantage in training efficiency from the above perspective.
---
**4. The subnet selection may be error-prone when generalized to various platforms**
We agree that determining the subnet shape is a general problem for hardware-aware compression methods, which can be error-prone and not generalizable across platforms. This issue originates from the fact that theoretical FLOPs cannot always reflect real-device efficiency.
However, our method mitigates this issue by allowing for rapid subnet determination for each target platform, thanks to its delivered one-for-all fine-tuned LLM. Specifically, for each target platform, users can select efficient subnets based on platform-specific profiling and directly obtain the accuracy of these selected subnets from the one-for-all fine-tuned LLM, without the need for per-subnet fine-tuning. This improved training efficiency enables users to quickly identify subnets that hit the Pareto frontier of accuracy-efficiency trade-offs, thus obtaining high-quality subnets efficiently.
---
Rebuttal 2:
Title: Author Response - Part 2
Comment: **5. Detailed subnet selection strategy**
Thank you for pointing this out! Currently, we adopt a hierarchical search strategy to deliver subnets from our design space that satisfy the target efficiency constraint, e.g., 50% weight remaining ratios in Table 1 of our manuscript, while maximizing the achievable accuracy. Specifically, we first perform a coarse grid search across uniformly spaced depth and width settings based on a small calibration set, e.g., 20 samples from the MMLU dataset, to identify the subnets that satisfy the given efficiency constraint with maximized accuracy. Next, we perform a more fine-grained grid search within depth/width ranges surrounding the optimal subnet identified in the coarse grid search stage. This process typically evaluates 40 promising subnets and takes no more than 10 minutes on an NVIDIA A5000 GPU.
We empirically find that the above strategy works well at the scale of our target problem and can already deliver high-quality subnets that outperform previous compression methods. We also note that more complex subnet selection strategies, such as the evolutionary search adopted by [11]-[15] cited in our manuscript, can also be employed, which will be our future work.
We will follow your suggestion and our promise in the abstract to open source all source code and the delivered subnets upon acceptance.
---
**6. Which LLMs are used**
As mentioned in Section 5.1 of our manuscript, we employ LLaMA 2 7B and Vicuna 7B v1.5.
---
Rebuttal 3:
Comment: Dear Reviewer,
We sincerely appreciate the time you dedicated to providing valuable feedback on our paper. In this author response, we have addressed all of your initial concerns. If you have any further questions or concerns, we are happy to discuss them with you. Additionally, we welcome any new suggestions or comments you may have!
Best,
The Authors of Paper #13852
---
Rebuttal Comment 3.1:
Comment: Thank you for your detailed response. My concerns have largely been addressed, and after considering the feedback from other reviewers, I am inclined to support the acceptance of this paper. I have a few additional suggestions:
1. The table provided in the first response should be included in the main paper. It would strengthen the argument regarding the “frontier of accuracy and efficiency,” particularly as the method demonstrates state-of-the-art performance across various efficiency metrics.
2. It would be beneficial to include a discussion of the complexity of the DP algorithm and its corresponding running time in the main paper. This addition could alleviate any concerns readers might have about the algorithm's overhead.
---
Reply to Comment 3.1.1:
Comment: Thank you for taking the time to review our rebuttal responses and for providing positive feedback! We are encouraged to hear that our rebuttal has addressed most of your concerns.
Following your suggestion, we will include (1) the real-device efficiency table and (2) the complexity and runtime of our DP algorithm, both in our first response, in the final version of our manuscript to strengthen the coherence of our work. | Summary: To address the problems of diverse resource constraints and deployment flows while using LLM for multiple real-world applications, this paper proposes an AmoebaLLM, featuring a knowledge-preserving subnet selection strategy, a shape-aware mixture of LoRAs and a distillation scheme with loss-magnitude balancing.
Strengths: 1. The paper is well-organized.
2. The method achieves SOTA performance on most metrics.
3. The experimental setup is easy to follow.
Weaknesses: Overall, the motivation and ideas of this paper are commendable. However, I have a few concerns:
1. DP-based depth shrinking aims to select suitable layers that can achieve the optimal target metric. Does this mean that the proposed method needs to evaluate the performance for each selection strategy? If so, how long is required for each evaluation? What is the relationship between the number of layers and the overall search time?
2. Another concern is about the optimal subnet. I wonder if the optimal subnet identified in the first phase is truly “optimal.” I believe that the selected subnet is the optimal choice for the initial parameters. However, the proposed method consists of two stages. After the fine-tuning stage, some subnets that were not optimal in the first phase might achieve the best target metrics. I understand that searching subnets with fine-tuning is very expensive, but have the authors considered this issue? Is this correct?
3. I am particularly interested in the training efficiency comparisons, as the authors mentioned that existing methods are not efficient. However, the experimental section does not show such comparisons. Could the authors provide relevant comparisons in Table 1 and Table 2?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses part.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the idea and performance of our work, as well as for your constructive comments! We have addressed all your concerns below.
**1. Whether our DP-based depth shrinking strategy needs to evaluate the performance for each selection strategy and its overhead**
Yes, you are right. Our DP-based strategy needs to evaluate the performance for each selection strategy, as described in Eq. (2) of our manuscript. Specifically, as mentioned in Section 5.4 of our manuscript, we measure the MMLU accuracy of 20 samples for each evaluation, which takes about 15 seconds on average on an NVIDIA A5000 GPU for LLaMA2 7B.
The overall search time is proportional to the total number of layers. Specifically, when targeting the removal of M layers out of all N layers, the total number of evaluations is M(N-1)/2. For LLaMA2 7B in our case, where N=32 and M=16, the overall search time is about 1 hour. It is worth noting that constructing the DP table is a one-time effort for each pre-trained LLM model and does not need to be repeated for each target subnet configuration.
We will add this analysis to the final version.
---
**2. Whether the optimal subnets change before and after fine-tuning and the potential of searching subnets with fine-tuning**
Thanks for the insightful question! First of all, we agree that under different subnet selection criteria, the selection of optimal subnets may vary before and after fine-tuning, making smarter strategies like iterative subnet selection and one-for-all fine-tuning promising.
Second, under our subnet selection criteria, i.e., the encoded factual knowledge measured by MMLU accuracy as elaborated in Section 5.4 of our manuscript, we observe a high consistency between the optimal subnets before and after fine-tuning. Specifically, this is because, according to observations in recent works [1][2][3], new factual knowledge can hardly be gained during finetuning, and the loss of factual knowledge during compression is hard to restore during finetuning. As such, the selection of optimal subnets stays consistent when using MMLU accuracy as a subnet selection indicator. This motivates us to adopt a simple-yet-effective two-stage strategy in our AmoebaLLM framework.
[1] “Does fine-tuning llms on new knowledge encourage hallucinations?”, Z. Gekhman et al., arXiv’24.
[2] “LIMA: Less Is More for Alignment”, C. Zhou et al., NeurIPS’23.
[3] “R-Tuning: Instructing Large Language Models to Say ‘I Don’t Know’”, H. Zhang et al., NAACL’24.
---
**3. Training efficiency analysis**
Thanks for the constructive comments and questions! Generally, the advantage of our method in training efficiency is that it requires a constant *O(1)* training time with respect to the number (*N*) of subnets with varying efficiency constraints to be delivered, thanks to the one-for-all fine-tuning. In contrast, previous compression methods require *O(N)* fine-tuning time.
For example, for delivering 3 subnets with varying weight remaining ratios in Table 1 of our manuscript, when fine-tuned on the same number of tokens, all baseline methods require about 45 GPU hours in total on an NVIDIA A5000 GPU, while our method requires 30 GPU hours. Similarly, when delivering 10 subnets, all baseline methods require about 150 GPU hours, while our method still requires only 30 GPU hours, thanks to its constant training complexity.
We will add the above analysis to the final version.
---
Rebuttal 2:
Comment: Dear Reviewer,
We sincerely appreciate the time you dedicated to providing valuable feedback on our paper. In this author response, we have addressed all of your initial concerns. If you have any further questions or concerns, we are happy to discuss them with you. Additionally, we welcome any new suggestions or comments you may have!
Best,
The Authors of Paper #13852
---
Rebuttal Comment 2.1:
Comment: Thank you for your reply. I will maintain my score.
---
Reply to Comment 2.1.1:
Comment: Thank you for taking the time to review our rebuttal responses and provide feedback! Any further suggestions or comments you have are welcome! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models | Accept (poster) | Summary: The paper introduces Diffusion of Thought (DoT), integrating diffusion models with Chain-of-Thought. The paper proposes two training-time sampling strategies to enhance self-correction during inference. Experimental results demonstrate the effectiveness of DoT in simple and complex reasoning tasks.
Strengths: - The paper presents an initial exploration into the reasoning ability of current diffusion language models.
- In simple reasoning tasks, DoT achieves up to 27× speed-up without performance drop compared to CoT and implicit CoT.
- DoT showcases promising self-correction abilities in complex reasoning problems.
Weaknesses: - The coupled sampling strategy, designed to rectify errors in previous thoughts, appears to assume that the noise added to the rationale $r_{i-k}, \cdots, r_{i-1}$ is the same as the potential errors in previous rationales during inference. This assumption is not intuitively obvious and lacks a clear explanation.
- Despite building upon the DiffuSeq framework, the paper does not include comparisons with the DiffuSeq model, which has the most similar model backbone.
- It would be better if the paper could provide a qualitative comparison of the reasoning paths between DoT and DoT$^{MP}$.
Technical Quality: 3
Clarity: 3
Questions for Authors: - L128-129 is confusing, and it is unclear how Table 2 supports the statement that “the gradient-based token guidance fails to do accurate conditioning as the model cannot exactly recover each conditioning token”.
- L153-154, it says that the model “mimics the inference stage with probability $\epsilon_i$”, and “$\epsilon_i$ linearly decays from 1 to $\epsilon_{min}$”. However, this suggests that the model utilizes $\hat{z}_0 = z_\theta(z_u; u)$ at the start of training. Given this setup, when training from scratch, would there be concerns that $z_\theta(\cdot)$ would fail to predict a meaningful $\hat{z}_0$ at the beginning of training?
- How does the model decide on the number of rationales in the multi-pass DoT?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has adequately stated the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer q6Ny for the review and are grateful for the time you spent with our submission. We wish to address your confusion and concerns by providing detailed responses to each of your comments.
**Weakness 1: Confusion about the coupled sampling strategy**
Thanks for pointing out this potential confusion. The main purpose of the coupled sampling mechanism is to enable the DoT$^{MP}$ model with the ability to correct potential errors in the previous thoughts. Without coupled sampling, the discrepancy arises between the use of correct previous thoughts during training and the possibly erroneous generated thoughts during testing, leading to error accumulation akin to that in autoregressive models. To alleviate this issue, we introduce the coupled sampling strategy for the model to learn to rectify past errors. This strategy involves injecting noise into previous thoughts during the training phase, enabling the model to possess the capability to see and correct errors in previous thoughts. We will add more details to clarify this confusion in the final version.
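As a toy illustration of this strategy (our own plain-Python sketch over scalar embedding values, not the authors' implementation, which operates on continuous embedding sequences), the training inputs couple a lightly noised copy of the previous thoughts with the diffused current thought:

```python
import math, random

def coupled_training_inputs(prev_thoughts, cur_thought,
                            alpha_bar_prev, alpha_bar_t, rng=random):
    """Diffuse x -> sqrt(a)*x + sqrt(1-a)*eps for both segments.
    The previous thoughts receive a (typically mild) noise level
    alpha_bar_prev, so the model learns to condition on imperfect
    past rationales instead of only gold ones."""
    def diffuse(xs, a):
        return [math.sqrt(a) * x + math.sqrt(1 - a) * rng.gauss(0, 1) for x in xs]
    return diffuse(prev_thoughts, alpha_bar_prev), diffuse(cur_thought, alpha_bar_t)
```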
**Weakness 2: Comparisons with the DiffuSeq model**
Thank you for bringing up this point. We have this comparison in Table 2 and here is the summary of the comparison between DiffuSeq and DoT. We will clarify this confusion in the final version.
| | Accuracy |
|------------------|----------|
| Plaid + DiffuSeq | 31.2 |
| Plaid + DoT | 32.6 |
| Plaid + DoT$^{MP}$ | 37.7 |
**Weakness 3: Qualitative comparison of the reasoning paths between DoT and DoT$^{MP}$**
Thank you for your suggestion. We observe that DoT$^{MP}$ outperforms DoT in correctness regarding the reasoning paths, while DoT slightly excels in diversity as depicted in Figure 4(b). Below we show some examples where DoT$^{MP}$ can predict the correct reasoning path while DoT fails. More content related to the reasoning path analysis will be incorporated in the paper accordingly.
>*Query*: The Kennel house keeps 3 German Shepherds and 2 Bulldogs. If a German Shepherd consumes 5 kilograms of dog food and a bulldog consumes 3 kilograms of dog food per day. How many kilograms of dog food will they need in a week?
>
>*DoT*: <<3*5=15>> <<7*3=21>> <<15+21=36>> #### 36
>
>*DoT$^{MP}$*: <<3*5=15>> <<2*3=6>> <<15+6=21>> <<21*7=147>> #### 147
> *Query*: Skyler has 100 hats on his hand with the colors red, blue, and white. Half of the hats are red, 3/5 of the remaining hats are blue, and the rest are white. How many white hats does Skyler have?
>
>*DoT*: <<1/2*100=50>> <<3/5*50=30>> <<100-30=70>> #### 70
>
>*DoT$^{MP}$*: <<100/2=50>> <<100-50=50>> <<50*3/5=30>> <<50-30=20>> #### 20
**Question 1: Confusion about L128-129**
Thanks for pointing out this potential confusion. The first line of Table 2 is the tuned Plaid model using gradient-based token guidance to generate responses, and it achieves poor performance. In Plaid, the use of gradient-based guidance to inject conditions involves adjusting random source embeddings through gradients to match the condition tokens. However, we observe discrepancies between the recovered source tokens and the condition tokens, which can adversely affect tasks requiring precise conditioning. Below we show an example on grade school math as a demonstration, where **bold** words in the query part are incorrectly recovered. We can see that three recovered query tokens exhibit minor differences due to the soft gradient guidance, interfering with the model's comprehension of the problem. That’s why we resort to hard control with gradient-free conditioning. We will add more details to clarify this confusion in the final version.
> *Groundtruth*: Two trains leave San Rafael at the same time. They begin traveling westward, both traveling for 80 miles. The next day, they travel northwards, covering 150 miles. What's the distance covered by each train in the two days? <<2*80=160>> <<150*2=300>> <<300+160=460>> <<460/2=230>> #### 230
>
> *Prediction*: **Three** trains leave San Juan at the same time. They **start** traveling westward, both traveling for 80 miles. The next day, they travel **southward**, covering 150 miles. What's the distance covered by each train in the two days? <<3*80=180>> <<180+80+150=340>> <<340/ 30=12.5>> #### 12.5
**Question 2: About probability $\epsilon$**
Thank you for bringing up this excellent question. On the one hand, we actually desire the presence of noise in $z_0$, as it forces the model to possess the self-correction ability. On the other hand, due to our small $\epsilon_\text{min}$, i.e., 0.95, we primarily rely on gold data, thereby preventing excessive noise in $z_0$ that could make training too challenging. We previously attempted a warmup process by integrating scheduled sampling after a certain number of steps, but we did not observe significant performance improvements. Hence, for the sake of simplicity, we opted for the current approach.
**Question 3: How does the model decide on the number of rationales**
Thank you for bringing up this point. During training, we append a special token <EOS> to the last thought, so when the model generates a thought followed by <EOS>, it stops generating further. We will add this detail to our manuscript.
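A minimal sketch of this stopping rule (ours, with `generate_thought` standing in as a hypothetical wrapper for one diffusion sampling pass):

```python
def multi_pass_generate(generate_thought, query, max_thoughts=16, eos="<EOS>"):
    """Generate thoughts one pass at a time; stop after the pass whose
    output ends with the <EOS> marker (or after max_thoughts passes)."""
    thoughts = []
    for _ in range(max_thoughts):
        t = generate_thought(query, thoughts)
        thoughts.append(t)
        if t.endswith(eos):
            break
    return thoughts

# Toy usage with a stub sampler that emits <EOS> on its second thought.
stub = lambda q, ts: ["<<1+1=2>>", "#### 2 <EOS>"][len(ts)]
print(multi_pass_generate(stub, "1+1?"))  # ['<<1+1=2>>', '#### 2 <EOS>']
```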
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer q6Ny,
Thank you for your valuable time to review our work and for your constructive feedback. We posted our response to your comments two days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concerns if there are any. If you have any further questions, we are happy to discuss them!
Best regards,
Authors | Summary: This paper introduces "Diffusion of Thought" (DoT) to diffusion language models to improve upon their reasoning capabilities.
The method adapts the implicit chain of thought methodology (iCoT) for autoregressive models, which relies on per-task fine-tuning to distill reasoning into transformer layers, while DoT encodes it into diffusion steps.
The methodology includes a comparison between a single-pass and a multi-pass approach. The single-pass averages all rationales across all timesteps, while the multi-pass introduces a causal inductive bias between rationales by averaging each reasoning step at a time across all timesteps.
Similar to the iCoT paper, evaluations are conducted on multiplication, boolean logic, and grade school math (GSM8K) tasks.
The approach leverages self-correction, self-consistency, and the number of reasoning steps (T) to further improve accuracy, trading off some efficiency.
Strengths: - The fundamental idea of encoding reasoning rationales into diffusion steps seems an intuitive path to explore.
- Due to the flexible timestep parameter (T), DoT offers greater flexibility compared to Implicit Chain of Thought (iCoT), which is limited by the number of transformer layers.
Weaknesses: - **Direct Comparison Baseline** The paper lacks a direct comparison with answer-only and traditional CoT techniques applied to diffusion language models, which would provide a clearer benchmark for evaluating the effectiveness of DoT. The paper only provides a comparison with autoregressive answer-only, CoT, and iCoT results. This does not convincingly demonstrate that the additional complexity introduced is justified by performance improvements.
- The paper does not adequately separate the specific contributions of the DoT methodology from the inherent advantages of using diffusion language models.
- **Missing iCoT context** Section 3 does not clearly explain how DoT builds on the implicit Chain-of-Thought (iCoT) approach, especially regarding the training operations. Instead, it focuses mainly on additional complexities and mechanisms introduced to improve overall results. A detailed connection between iCoT and DoT is needed to better understand the modifications and their impact. The terms 'single-pass' and 'multi-pass' could be misleading as they typically imply batch processing. Here, they refer to how probabilities of different reasoning paths are handled, in parallel or sequentially.
- **Task-Specific Fine-Tuning Requirement** DoT performs well on simple tasks like multiplication but requires fine-tuning with a larger number of reasoning steps T for grade school math. This contrasts with CoT methods in autoregressive models, which can adapt more flexibly using examples directly in the input.
- **Throughput Comparison** The absence of a direct throughput comparison for fixed **T** across evaluation settings limits understanding of T's impact on performance and efficiency. Table 1 only reports results for dynamically chosen sampling timesteps T.
Technical Quality: 2
Clarity: 3
Questions for Authors: Could the authors clarify why they chose to compare DoT with answer-only, CoT, and iCoT for autoregressive models, but did not include similar comparisons with answer-only and CoT for diffusion language models as well?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: yes, the authors acknowledge the reliance on specialized training per reasoning task and the limited generalization capabilities.
The need for more reasoning steps as tasks become more complex could be further elaborated upon.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Y1KX for your review and are grateful for the time you spent on our submission. Below we would like to give detailed responses to each of your comments.
**Weakness 1: Direct Comparison Baseline**
Thank you for your suggestion. We conduct the answer-only setting to further validate the effectiveness of DoT. The result table reveals that fine-tuning diffusion models solely with answer data leads to inferior performance compared to DoT, mirroring the degradation of AR models in the absence of CoT.
| | Accuracy |
|-------------------|----------|
| GPT-2 Answer-only | 17.0 |
| GPT-2 CoT | 43.9 |
| Plaid Answer-only | 12.4 |
| Plaid DoT | 37.7 |
| SEDD Answer-only | 29.1 |
| SEDD DoT | 45.7 |
> "The paper does not adequately separate between the specific contributions of the DoT methodology from the inherent advantages of using diffusion language models."
Thank you for bringing up this point. Firstly, one of our contributions is exactly employing diffusion language models for multi-step text reasoning. To the best of our knowledge, we are the first to bring diffusion into the realm of complex text reasoning such as mathematical reasoning. Additionally, we show that fine-tuning a pretrained diffusion model is a non-trivial task. As demonstrated in the ablation study (Table 2), directly following the pretraining approach leads to subpar results. Therefore, we resorted to an infilling approach and further proposed a series of sampling strategies and multi-pass variants to enhance the model's performance. The comparison between applying CoT and DoT to diffusion is presented below.
| | Accuracy |
|-------------------|----------|
| Plaid CoT (gradient-based guidance) | 0.5 |
| Plaid CoT | 31.2 |
| Plaid DoT$^{MP}$ | 37.7 |
**Weakness 2: Missing iCoT context**
Thank you for your suggestion to include more description of iCoT. In Section 3.1, we theoretically outline three parallel approaches to modeling CoT: AR, iCoT, and DoT. Below, we discuss some similarities and differences between iCoT and DoT. DoT shares similarities with iCoT in the following 3 high-level aspects: i) Both DoT and iCoT try to tackle the time cost of auto-regressively generating the chain-of-thought rationales; ii) Both DoT and iCoT process “thoughts” “vertically” in a hidden dimension, but DoT presents the hidden information across different diffusion timesteps (in the temporal dimension), while iCoT presents the hidden information across the model’s different layers (in the spatial dimension); iii) Both DoT and iCoT evaluate the model’s CoT ability, so we refer to some experiment settings of iCoT, including datasets and baselines.
However, in terms of methodology, iCoT and DoT are completely different. iCoT still relies on next-token prediction for autoregressive generation, while DoT utilizes diffusion, which offers additional advantages beyond efficiency, such as a flexible performance-efficiency trade-off and self-correction capability. We will add more clarification about DoT and iCoT in the paper.
Thanks for pointing out the potential confusion about the names ‘single-pass’ and ‘multi-pass’; we will clarify them in the paper.
**Weakness 3: Task-Specific Fine-Tuning Requirement**
Thank you for your constructive comments. The main reason is that the current pre-trained diffusion language models are relatively small, resulting in the underutilization of their in-context learning capabilities, as we discussed in the limitations. Exploring in-context learning for text diffusion is another interesting topic.
**Weakness 4: Throughput Comparison**
Thank you for your suggestion. In Appendix L713-L714, we provide a detailed description of the T values used in Table 1. Specifically, we utilize T = 1 for digit multiplication, T = 2 for the boolean logic dataset, and T = 64 for grade school math. It is worth noting that adjusting the parameter T is itself an advantage of DoT, as it allows us to allocate computational resources more efficiently to challenging tasks, while using a smaller T for simpler tasks. We show how T affects performance on grade school math in Figure 3, and below we also show how T affects throughput for Plaid DoT$^{MP}$. The throughput appears to be nearly inversely proportional to T (i.e., inference time grows roughly linearly with T).
| T | Accuracy | Throughput |
|-----|----------|------------|
| 1 | 18.18 | 6.6 |
| 2 | 35.9 | 3.4 |
| 4 | 36.7 | 1.7 |
| 8 | 36.4 | 0.9 |
| 16 | 36.1 | 0.4 |
| 32 | 37.4 | 0.2 |
| 64 | 37.7 | 0.1 |
| 128 | 37.7 | 0.05 |
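As a quick sanity check on these numbers (illustrative only, using the values copied from the table above), one can verify that throughput falls off roughly as $1/T$, i.e., that $T \times$ throughput stays approximately constant:

```python
# (T, throughput) pairs for Plaid DoT^MP, copied from the table above.
pairs = [(1, 6.6), (2, 3.4), (4, 1.7), (8, 0.9),
         (16, 0.4), (32, 0.2), (64, 0.1), (128, 0.05)]

# If throughput scales like c / T (inference time linear in T), then
# T * throughput should stay roughly constant across the whole range.
products = [t * thr for t, thr in pairs]
spread = max(products) / min(products)
print([round(p, 1) for p in products], round(spread, 2))
```

The products stay within roughly 15% of each other across two orders of magnitude in T, consistent with a near-inverse throughput-vs-T relationship.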
**Q1: comparisons with answer-only and CoT for diffusion language models**
Please see the response for weakness 1.
**Limitation: The need for more reasoning steps as tasks become more complex could be further elaborated upon.**
Thank you for your suggestion. The main point of the CoT paper is to improve the reasoning ability by involving more intermediate steps. In DoT, we also observed that more diffusion timesteps (computing FLOPs) yield better results. [1] also presents a similar idea, and this topic is worthy of further investigation.
[1] Pondernet: Learning to ponder. ICML 2021.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Y1KX,
Thank you for your valuable time to review our work and for your constructive feedback. We posted our response to your comments two days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concerns if there are any. If you have any further questions, we are happy to discuss them!
Best regards,
Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for the detailed rebuttal and for providing the necessary comparisons. These results should have been included in the initial submission to support the narrative of the paper. Given these updates, I will increase the current score to 5.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer Y1KX,
Thank you for your approval of our work. We sincerely appreciate your suggestions to enhance the rigor of our paper. We would be happy to do any follow-up discussion or address any additional comments.
Best regards,
Authors | Summary: The authors propose a chain-of-thought technique for diffusion language models. They achieve this by diffusing a set of hidden representations (thoughts) through time. Different sampling techniques are introduced to enhance error recovery including looking forward and conditioning on multiple previous thought steps in predicting the current thought. They achieve competitive results in terms of throughput compared to chain-of-thought paradigms applied to small language models.
Strengths: - The authors extend the chain-of-thought paradigm to language diffusion models, which is novel and significant.
- Their results seem to suggest that this is a promising direction.
Weaknesses: - The presentation can be enhanced:
- The transparent figure colors are very hard to read.
- The figures do not render correctly on different PDF viewers.
- Comparison to larger open language models (e.g., Llama) would improve this contribution's placement in the literature.
Technical Quality: 3
Clarity: 2
Questions for Authors: I believe that in line 155, the first word should be "future" instead of "former," and the last word should be "backward" instead of "forward." Is this a typo or a misunderstanding on my part?
- Additionally, I think a comparison to the paradigm in [1] could be informative.
[1] Harvey W, Wood F. Visual chain-of-thought diffusion models. arXiv preprint arXiv:2303.16187. 2023 Mar 28.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 5GKr for your review and are grateful for the time you spent on our submission. We are also glad you think our paper is novel and significant. Below we would like to give detailed responses to each of your comments.
**Weakness 1: The presentation regarding color and figure rendering can be enhanced**
Thank you for bringing to our attention the potential confusion regarding color and figure rendering. We will address and clarify this issue in the final version of the paper.
**Weakness 2: Comparison to larger open language models (e.g., Llama)**
Thank you for your suggestion. We add the results for (LoRA) fine-tuning LLMs on the same dataset, listed in the following table. Please note that the current diffusion pretrained model is much smaller than Llama 7B, so this comparison is not fair, and we list these results only for reference. We have validated that our DoT is better than the same-scale autoregressive model GPT-2 (Table 1), which shares a similar architecture with Llama. We believe that further exploration of diffusion language models will lead to larger models that can compete with current LLMs, allowing DoT to achieve results more comparable to Llama.
| | Params | Accuracy |
|-----------------|--------|----------|
| GPT-2 CoT | 355M | 43.9 |
| Mistral CoT | 7B | 68.8 |
| Llama CoT | 7B | 59.0 |
| SEDD DoT (Ours) | 424M | 45.7 |
**Q1: About the confusion in line 155**
Thank you for bringing up this question. For the first “former”: in the inference stage of diffusion, the timestep t starts from T and progressively decreases to 1, so we refer to the larger time $u$ as a “former” step. Regarding the two “forward” words in this line, the term carries different meanings: the first “forward” refers to the forward process in diffusion, which involves adding noise to data, while the last “forward” denotes the forward pass of the model, in contrast to the backward gradient-backpropagation pass. We will avoid using the same word with different meanings to prevent misunderstandings.
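A one-line illustration of this timestep convention (our own sketch, not from the paper): during inference the timesteps are visited in decreasing order, so a numerically larger $u$ is an earlier, i.e. “former,” step.

```python
T = 4
# Diffusion inference visits timesteps from T down to 1, so step u = 4 is
# "former" (earlier in inference) relative to step t = 1.
order = list(range(T, 0, -1))
print(order)  # [4, 3, 2, 1]
```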
**Q2: A comparison to the paradigm in [1] could be informative.**
Thank you for sharing the paper ‘Visual chain-of-thought diffusion models’. This paper borrows the idea of CoT in LLMs, which involves intermediate steps to improve performance. Their model acts in two steps: first generate the CLIP embedding and then generate the final image. Both that paper and our DoT mention CoT in diffusion models, but there is a big difference: the mentioned paper only borrows the idea of CoT and cannot perform CoT reasoning, while DoT focuses on the reasoning ability of text models, as an alternative to autoregressive CoT in LLMs. We will add this comparison to the related work.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 5GKr,
Thank you for your valuable time to review our work and for your constructive feedback. We posted our response to your comments two days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concerns if there are any. If you have any further questions, we are happy to discuss them!
Best regards,
Authors
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 5GKr,
Thank you for your valuable time to review our work and constructive feedback. As the discussion period draws to a close, we would appreciate it if you could kindly take a look at our response to your comments. If you have any further questions, we are happy to discuss them!
Thanks very much!
Best regards,
Authors | Summary: The work introduces Diffusion-of-Thought (DoT), a method that combines diffusion language models with the Chain-of-Thought technique to enhance their reasoning ability. DoT uses the flexibility of diffusion processes to allow reasoning steps to diffuse over time, improving performance in several mathematical tasks, and demonstrating its self-correction abilities. The experimental results show DoT's effectiveness in many tasks.
Strengths: - The work deals with an important problem in ML, verifying reasoning ability on a recently arisen diffusion language model.
- The proposed method is technically sound.
- The experiments show DoT's empirical effectiveness on many math benchmarks.
Weaknesses: - Some of the recent work is not discussed [1]
- No standard deviation or confidence interval in the results.
---
[1] Can mamba learn how to learn? a comparative study on in-context learning tasks, ICML 2024
Technical Quality: 3
Clarity: 2
Questions for Authors: - In Figure 2, can you represent the rationale example in the DoT chart in a similar way to the Problem-solving tasks chart on the left (like 2+1=3 in the grey box)? What exactly is the rationale for the DoT chart?
- Is the performance improvement attributed to enhanced reasoning? Could it simply be due to fine-tuning? It would be beneficial to compare the results with a method that has been fine-tuned without using DoT.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - Limited ablation study
- General performance improvement beyond mathematical tasks are not discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer E7FJ for your review and are grateful for the time you spent on our submission. We're pleased you find our method effective. Below, we provide a point-by-point rebuttal to clarify your concerns.
**Weakness 1: Discussion of recent work**
Thank you for sharing the paper *Can mamba learn how to learn?*. We noticed several differences between this paper and DoT. First, Mamba is a new model architecture, an alternative to traditional Transformers with full attention. Our diffusion models currently use the traditional Transformer architecture, which is orthogonal to the design of Mamba. It would be interesting to see DoT’s performance with Mamba as the base model. Second, the Mamba paper mainly discusses in-context learning ability, while our experimental setting focuses on chain-of-thought reasoning. In all experiments except for the few-shot ChatGPT baselines, we did not use in-context demonstrations. Exploring the ICL ability of diffusion models is another interesting topic.
**Weakness 2: Standard deviation**
Thank you for bringing up this point. All experimental results were obtained by averaging the results of 3 separate trials, and the differences are significant at p < 0.01. The experimental results also reveal significant disparities in accuracy among different models.
**Question 1: Rationale example in Figure 2**
Thank you for bringing up this question. In the problem-solving tasks chart, we have two rationales and one final answer, 3 CoT steps in total `<<2/2=1>><<2+1=3>>####3`. For AR models, it will generate each token **one by one**. For single-pass DoT, it will generate the whole CoT steps in parallel: `<<2/2=1>><<2+1=3>>####3` (Table 3). For multi-pass DoT, it will generate `<<2/2=1>>` first in parallel, then `<<2+1=3>>`, and then `####3`.
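The two decoding orders can be sketched schematically as follows (a toy illustration of the generation order only, not the authors' implementation; the actual model denoises noisy token sequences rather than copying strings):

```python
# Toy CoT with two rationales and a final answer, as in the example above.
rationale_steps = ["<<2/2=1>>", "<<2+1=3>>", "####3"]

def single_pass_dot():
    # One diffusion trajectory: all CoT tokens are denoised in parallel,
    # so rationales and answer emerge together across the timesteps.
    return "".join(rationale_steps)

def multi_pass_dot():
    # One trajectory per rationale: each step is denoised (in parallel
    # within itself) while conditioning on the already-finished steps.
    context = ""
    for step in rationale_steps:
        context += step  # finished rationales become the prefix condition
    return context

print(single_pass_dot())  # <<2/2=1>><<2+1=3>>####3
```

Both orders yield the same final string; the difference is the causal inductive bias multi-pass adds by fixing earlier rationales before generating later ones.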
**Question 2: Comparison with no-DoT finetune**
Thank you for your suggestion. We conduct the answer-only setting to further validate the effectiveness of DoT. The result table reveals that fine-tuning diffusion models solely with answer data leads to inferior performance compared to DoT, mirroring the degradation of AR models in the absence of CoT.
| | Accuracy |
|------------------|----------|
| GPT-2 Answer-only | 17.0 |
| GPT-2 CoT | 43.9 |
| Plaid Answer-only | 12.4 |
| Plaid DoT | 37.7 |
| SEDD Answer-only | 29.1 |
| SEDD DoT | 45.7 |
**Limitation: Ablation study and general performance**
Thank you for your suggestion. We have listed the comparison with the no-DoT fine-tune and will add it to the paper. The current ablation experiments further validate DoT’s effectiveness.
In this work, we mainly focus on the reasoning ability of models, including both logical and mathematical reasoning. For more general evaluation, such as the variety of complex tasks handled by ChatGPT, further advancements are still required to enhance the scalability of diffusion language models toward general ability, as described in our limitation section.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer E7FJ,
Thank you for your valuable time to review our work and for your constructive feedback. We posted our response to your comments two days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concerns if there are any. If you have any further questions, we are happy to discuss them!
Best regards,
Authors
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer E7FJ,
Thank you for your valuable time to review our work and constructive feedback. As the discussion period draws to a close, we would appreciate it if you could kindly take a look at our response to your comments. If you have any further questions, we are happy to discuss them!
Thanks very much!
Best regards,
Authors | null | NeurIPS_2024_submissions_huggingface | 2024 | Summary: * The authors propose DoT, a chain-of-thought method for diffusion language models.
* DoT is applicable to both continuous embedding-based diffusion models and continuous-time Markov chain discrete diffusion models.
* DoT shows performance increase on digit multiplication, boolean logic, and GSM8K tasks, as well as tradeoffs in reasonability and efficiency.
* Overall, this is a relevant work in the growing field of diffusion language modeling that applies CoT reasoning from the AR literature.
Strengths: * DoT is applied to both discrete and continuous diffusion language models. Given that there are various formulations of diffusion language models (embedding diffusion, simplex diffusion, masking state / absorption, continuous-time Markov chain), this is a plus.
* The authors also demonstrate DoT both by pretraining small models (standard 12-layer transformer with 6 encoder and decoder layers, respectively) from scratch, as well as leveraging pretrained diffusion models (Plaid, SEDD).
* DoT shows strong performance across multiplication, boolean, and GSM8K datasets, outperforming GPT-2 baselines.
* Empirically, the authors demonstrate that it is possible for diffusion models to have flexible thought processes, where the model builds off of intermediate thoughts to arrive at the correct answer (similar to AR CoT) or jumps to an answer, then corrects its intermediate steps.
Weaknesses: * The dataset explored in this work seems rather simple. Although the work understandably builds on top of previous work that employs the same dataset, the fact that baseline models achieve 100% or close to 100% makes it difficult to lucidly compare the baseline with the proposed approach. This applies to both multiplication setups as well as boolean logic, where GPT-2 models already reach 100% even without CoT.
* The authors use throughput as the basis for why DoT is superior to AR CoT when both methods achieve 100%. This is a slightly weaker argument because throughput for diffusion models critically depends on a number of hand-crafted parameters, such as the number of backward steps and the model context length. These parameters are orthogonal to DoT. It is possible that the particular setup of this work was favorable to diffusion, but not necessarily so in the general case. Moreover, AR models can leverage key-value caching to speed up generation, whereas diffusion models cannot. I am not sure if I am entirely convinced that DoT would generally be faster than CoT in the wild.
* Although CoT with diffusion models is a new area, the methodology itself is not entirely novel, as it appears to be an adaptation of DiffuSeq-style masking applied to CoT training data (i.e., give the model a question as its prefix context, and diffuse over the answer + CoT intermediate steps).
Technical Quality: 3
Clarity: 3
Questions for Authors: * Could you quickly clarify what you mean in the first sentence of Section 3.2? I was not able to find a direct connection between the motivating claim and Table 2.
* Did you notice a big difference in the quality or diversity of the diffusion model output when softmax temperature smoothing was not applied?
* Do you train the model to predict padding tokens so that it can output sequences of variable lengths? If so, does the diffusion model always generate 128 tokens?
* Scheduled sampling essentially uses the prediction from the previous timestep instead of the noised ground truth as the condition to the diffusion model (similar to how teacher forcing is stochastically applied when training autoregressive models). This seems like a variation of self-conditioning [1, 2, 3], which incorporates model predictions from the previous timestep to generate predictions at the current timestep. It would be instructive to delineate any similarities and differences between self-conditioning and the proposed scheduled sampling.
* In the GPT-2 throughput benchmarks, did you enable K-V caching, flash attention, and other standard techniques for accelerating the forward pass?
* Table 3 demonstrates the GSM8K CoT format used in this work. Did you preprocess the dataset to strip all natural language and extract only `<< blah >>` expressions? If not, did you find that the model was able to generate coherent natural language expressions that "made sense" along with the equations and the final answer?
---
[1] Self-conditioned Embedding Diffusion for Text Generation. Strudel et al. 2022. \
[2] Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning. Chen et al. 2022. \
[3] TESS: Text-to-Text Self-Conditioned Simplex Diffusion. Mahabadi et al, 2024.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: * The authors note that fully pretrained diffusion models are sparse, and that most diffusion language models remain at the small parameter regime (GPT-2).
* Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer abfr for the review and are grateful for the time you spent with our submission. We wish to address your confusion and concerns by providing detailed responses to each of your comments.
**Weakness 1: Simple datasets**
Reasoning ability encompasses both arithmetic and logical reasoning. From the experimental results, we validated that DoT performs as well as GPT models after fine-tuning on the boolean logic dataset. Yet, arithmetic reasoning such as grade school math is a more challenging task for all models. Besides, by including the relatively simple datasets, we aim to demonstrate that **DoT not only performs well on relatively simple tasks but also exhibits higher efficiency compared to GPT models**.
**Weakness 2: Throughput**
Thank you for bringing up this point. When we show the throughput of DoT is superior to AR-CoT, what we want to emphasize is not purely throughput but its flexibility. As a hyperparameter, diffusion timesteps are determined through a held-out validation set. The number of time steps is influenced by the complexity of the task at hand. For instance, in simpler tasks such as digit multiplication, only a small number of timesteps is sufficient to obtain desirable performance. However, in more challenging reasoning tasks like GSM, we can increase the number of timesteps to enhance the performance. In this case, the final throughput is not superior to AR-CoT but we can achieve better results. In other words, we can spare more time on “thinking” on complex tasks (this interesting idea is introduced in Section 4.4 and Figure 4a). **We argue that this flexibility is exactly the advantage of DoT over auto-regressive CoT models**.
Moreover, AR models can utilize key-value caching to enhance throughput during generation, but they still decode token-by-token for longer outputs, whereas diffusion models operate with a fixed number of timesteps regardless of output length. Also, by freeing the KV-cache memory, the inference of diffusion models can be conducted with a larger batch size, as mentioned in the SEDD paper [1]. In our paper, we highlight the potential efficiency advantage of DoT in our tested cases, and we believe that the efficiency of diffusion models in the wild is another interesting topic worthy of further investigation.
[1] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution, ICML 2024.
**Weakness 3: novelty of the methodology**
In terms of the various approaches that can be used in the fine-tuning stage of diffusion models, DoT is non-trivial. Other alternatives include directly fine-tuning Plaid 1B and using back-propagated gradients to control the generation, and directly fine-tuning DoT models initialized with GPT-2 parameters, as shown in the Appendix. Compared with them, **the current DoT model stands out as the most effective. These empirical findings have not yet been addressed in any other existing literature**.
Moreover, we propose the multi-pass variant of DoT and two sampling strategies to further enhance the performance, which we believe are significant in the diffusion realm.
Finally, our paper also extensively discussed **the potential advantages of DoT over autoregressive models particularly for reasoning tasks**, such as the reasonability-efficiency trade-off, and the self-correction, which have not been explored to the best of our knowledge.
**Q1: Connection between the first sentence of Section 3.2 and Table 2**
Thanks for pointing out this potential confusion. The Plaid model is based on a continuous embedding space, and the model relies on gradient-based guidance to control token generation. If we simply follow this pretraining approach to continue training on the fine-tuning datasets (first row in Table 2), the performance is poor, and we believe this is because the gradient-based guidance fails to perform accurate conditioning. This motivates us to use DiffuSeq-style training. Please refer to the comments for detailed examples.
**Q2: Sampling temperature**
In our experiment, we found that enabling softmax temperature would slightly decrease the quality compared to greedy decoding (accuracy gap within 1%), but the diversity can be enhanced. As a result, we find a noticeable performance boost (4% on Plaid and 12% on SEDD) after performing self-consistency marginalization. We set the softmax temperature to 0.5 based on the validation set.
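The self-consistency marginalization step can be sketched as a majority vote over answers sampled at nonzero temperature (the sampled answers below are made-up values for illustration):

```python
from collections import Counter

# Hypothetical final answers sampled at temperature 0.5 for one problem.
sampled_answers = ["230", "230", "12.5", "230", "460"]

# Self-consistency: marginalize over reasoning paths by keeping the most
# frequent final answer.
majority, votes = Counter(sampled_answers).most_common(1)[0]
print(majority, votes)  # 230 3
```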
**Q3: Variable lengths**
Yes, we use [PAD] to control the output length. The model always generates 128 tokens in total, but all trailing [PAD] tokens are removed.
**Q4: Connection between schedule sampling and self-conditioning**
Thank you for bringing up this great question. The similarity between our scheduled sampling and self-conditioning is that they both condition on the model's own predicted sequence. However, they serve different purposes and are complementary in nature. The goal of self-conditioning is to use the previously estimated $\tilde x_0$ as an additional feature besides the original $x_t$, so the network models $f(x_t,\tilde x_0,t)$, where $x_t$ is always corrupted from the oracle data.
The goal of our scheduled sampling, in contrast, is to add inference-time noise to $x_t$ so that training is consistent with the inference stage; the network thus learns to model $f(\tilde x_t,t)$. **They are based on different purposes, and can be used together.** For example, we can potentially model $f(\tilde x_t,\tilde x_0, t)$ by providing $\tilde x_0$ as an additional feature.
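A toy numeric sketch of the distinction (our own illustration with a stand-in denoiser, not the paper's model): teacher forcing corrupts the oracle $x_0$, scheduled sampling corrupts the model's own estimate, and self-conditioning would additionally feed that estimate back as an input.

```python
import random

random.seed(0)

def toy_denoiser(x_t, t):
    # Stand-in "model" (hypothetical): just shrinks its input toward zero.
    return [v / (1.0 + t) for v in x_t]

def corrupt(x0, t):
    # Toy forward process: add noise whose scale grows with t.
    return [v + random.gauss(0, 0.1) * t for v in x0]

x0, T = [1.0, 1.0, 1.0], 4

# Teacher forcing: the training input x_t is corrupted from the oracle x0.
x_t_oracle = corrupt(x0, T)

# Scheduled sampling: corrupt the model's own previous estimate instead,
# so training-time inputs match what the model sees at inference time.
x0_hat = toy_denoiser(x_t_oracle, T)
x_t_scheduled = corrupt(x0_hat, T - 1)

# Self-conditioning would instead keep x_t_oracle and pass x0_hat as an
# extra input, i.e. f(x_t, x0_hat, t); the two ideas can be combined.
print(len(x_t_oracle), len(x_t_scheduled))
```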
**Q5: Details for throughput results**
For GPT models, we use KV-caching when decoding. For all models, considering both the small model size and context size, we didn’t use flash-attention.
**Q6: Details for GSM-Aug dataset**
Following the dataset setting in the implicit-CoT paper, we keep the natural language in the problem description but remove the natural language in the CoT response, keeping only the symbolic expressions in `<<>>`.
---
Rebuttal Comment 1.1:
Title: Detailed examples for Q1
Comment: Below we show an example on grade school math as a demonstration, where **bold** words in the query part are incorrectly recovered. We can see there are four recovered query tokens that exhibit minor differences due to soft gradient guidance, causing interference with the model's comprehension of the problem. That’s why we resort to hard control with gradient-free conditioning. We will add more details to clarify this confusion in the final version.
>Groundtruth: Two trains leave San Rafael at the same time. They begin traveling westward, both traveling for 80 miles. The next day, they travel northwards, covering 150 miles. What's the distance covered by each train in the two days? <<2*80=160>> <<150*2=300>> <<300+160=460>> <<460/2=230>> #### 230
>Prediction: **Three** trains leave San **Juan** at the same time. They **start** traveling westward, both traveling for 80 miles. The next day, they travel **southward**, covering 150 miles. What's the distance covered by each train in the two days? <<3*80=180>> <<180+80+150=340>> <<340/ 30=12.5>> #### 12.5
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer abfr,
Thank you for your valuable time to review our work and for your constructive feedback. We posted our response to your comments two days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concerns if there are any. If you have any further questions, we are happy to discuss them!
Best regards,
Authors | null | null | null | null | null | null |
Linguistic Collapse: Neural Collapse in (Large) Language Models | Accept (poster) | Summary: This paper empirically investigates the emergence of Neural Collapse (NC) properties during the training of causal language models. NC is a phenomenon observed in the top layer of deep-nets trained on one-hot classification problems, where the last-layer class-mean embeddings become equinorm, have maximal angular separation, and align with their respective last-layer classifiers. However, NC only emerges when: 1) the models are trained beyond the zero-training-error regime, 2) the number of classes $C$ does not exceed the last-layer embedding dimension $d$, 3) the classes are balanced (they have the same number of samples in the training set).
Considering causal language training as the classification of contexts across $C$ words in the vocabulary set, this work explores the extent to which NC emerges in language models, given that none of the three conditions above hold for language models: 1) language models are typically trained for a few epochs, 2) the number of classes $C$ is large, 3) the word frequencies in the training set are heavily imbalanced. Since the original NC geometry is not achievable, particularly without condition 2, the authors define *generalized* NC metrics to measure the geometrical properties of the last-layer embeddings and classifiers. They train NeoGPT models with varying numbers of layers and widths $d$, and analyze the correlation between the (generalized) NC metrics and validation loss across different training regimes.
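To make the collapse metrics concrete, here is a toy pure-Python sketch (ours, not the paper's code) of an NC1-style within-/between-class variability ratio on synthetic features; the NC literature's exact NC1 uses $\mathrm{tr}(\Sigma_W \Sigma_B^{+})/C$, but the simplified ratio below captures the same qualitative quantity.

```python
import random
import statistics

random.seed(0)
C, d, n = 4, 6, 50
# Toy last-layer features: n samples per class, tightly clustered around
# random class means (hypothetical numbers, purely for illustration).
means = [[random.gauss(0, 1) for _ in range(d)] for _ in range(C)]
feats = [[[m + random.gauss(0, 0.05) for m in means[c]] for _ in range(n)]
         for c in range(C)]

# Empirical class means per coordinate.
class_means = [[sum(x[j] for x in feats[c]) / n for j in range(d)]
               for c in range(C)]

# Simplified NC1-style ratio: average within-class variance divided by
# between-class variance of the class means (small => features collapse
# to their class means).
within = statistics.mean(
    statistics.pvariance([x[j] for x in feats[c]])
    for c in range(C) for j in range(d)
)
between = statistics.mean(
    statistics.pvariance([class_means[c][j] for c in range(C)])
    for j in range(d)
)
ratio = within / between
print(ratio)
```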
Strengths: The problem setup is interesting as it attempts to extend previous observations in deep learning to the trending language models, despite their distinct configurations. The scope of the study, its connection to previous works, and the limitations of the setup are well-presented in the paper.
Weaknesses: It is not clear to me whether the suggested NC metrics are suitable for the language setup. For instance, since in a language dataset, a given fixed context might be followed by different next tokens, a model that is accurate in predicting the labels on the training set cannot achieve near-zero NC1. Or regarding UNC3, see question 4 below.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is your setup only specific to causal language modeling? Particularly, a) Is any part of the formulation or results influenced by the causal/autoregressive training, or is it simply related to the different nature of language datasets in general, such as the large number of classes and the possibility of having multiple labels for a given training sample?, b) Do you expect similar behavior if the experiments were conducted using other language training methods, such as masked language modeling?
2. I don’t understand what you mean by *ambiguous* samples, ``which are not soft- or multi-label’’ (line 128). When a context appears several times in the training set, each time followed by a different next token, this training sample has a soft label, where the label (next token) can take on different values with some non-zero probability for each.
3. (Related to the previous question) What is classification error (line 130) in the context of next-token prediction and what do you mean by "irreducible noise’’? How do you define the error for contexts that appear several times with different next tokens/labels?
4. Intuitively, why do you expect that a model minimizing UNC3 has better generalization? Minimizing NC3 means the classifiers and mean vectors are aligned which is consistent with the classification objective. However, minimizing UNC3 (CoV of the term in Eq (8)), only implies the degree of misalignment between the classifiers $w$ and mean embeddings $\mathbf{\mu}$ is uniform across classes/samples. How does this connect to generalization?
5. Do you observe a decrease in the NC metrics and validation loss as you train the models longer (e.g., 1 vs 10 epochs)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I don’t understand what you mean by *ambiguous* samples, ``which are not soft- or multi-label’’ (line 128). When a context appears several times in the training set, each time followed by a different next token, this training sample has a soft label, where the label (next token) can take on different values with some non-zero probability for each.
The reviewer is correct that such ambiguous contexts do resemble soft-label data examples, and we will edit the relevant paragraph in the Related Works to reflect this.
> It is not clear to me whether the suggested NC metrics are suitable for the language setup. For instance, since in a language dataset, a given fixed context might be followed by different next tokens, a model that is accurate in predicting the labels on the training set cannot achieve near-zero NC1. Or regarding UNC3, see question 4 below.
We agree that ambiguous (soft-label) samples such as “Once upon a time ___.” do exist. However, we note that such contexts are more often shorter subsequences typically found at the beginnings of sequences. In longer sequences, the probability of two identical contexts being followed by different next words is diminishingly small. Additionally, longer contexts significantly outnumber shorter ones in the training dataset, and this is progressively true as models adopt ever longer context windows, such as Llama 3.1 (https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). In principle, shorter (ambiguous) contexts could even be excluded or treated as outliers in the data. Therefore, we conclude that this is likely not an issue overall.
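The diminishing-proportion argument is easy to check on any token sequence: count how often an identical context is followed by more than one distinct next token, as a function of context length. A minimal sketch on a toy corpus (the corpus and the `ambiguous_fraction` helper are illustrative, not our actual pipeline):

```python
from collections import defaultdict

def ambiguous_fraction(token_ids, context_len):
    """Fraction of distinct length-`context_len` contexts that are
    followed by more than one distinct next token (soft-label contexts)."""
    next_tokens = defaultdict(set)
    for i in range(len(token_ids) - context_len):
        context = tuple(token_ids[i:i + context_len])
        next_tokens[context].add(token_ids[i + context_len])
    ambiguous = sum(1 for nexts in next_tokens.values() if len(nexts) > 1)
    return ambiguous / max(len(next_tokens), 1)

# Toy corpus: the bigram (1, 2) recurs with several different continuations.
corpus = [9, 1, 2, 3, 8, 1, 2, 4, 7, 1, 2, 5, 6, 1, 2, 3]
assert ambiguous_fraction(corpus, 1) > 0     # short contexts: some ambiguity
assert ambiguous_fraction(corpus, 3) == 0.0  # longer contexts: all unique
```

On real corpora the same trend appears: the fraction of ambiguous contexts decays rapidly with context length, which is the basis of the argument above.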
On the other hand, we believe it would still be interesting to extend our work to incorporate soft labels and multi-label approaches. This could be explored using recent advancements in multi-label neural collapse (https://arxiv.org/abs/2310.15903) and mixup neural collapse (https://arxiv.org/abs/2402.06171). We hope that our work serves as a strong first step in modeling NC at the next-token level.
> Is your setup only specific to causal language modeling? Particularly, a) Is any part of the formulation or results influenced by the causal/autoregressive training, or is it simply related to the different nature of language datasets in general, such as the large number of classes and the possibility of having multiple labels for a given training sample?, b) Do you expect similar behavior if the experiments were conducted using other language training methods, such as masked language modeling?
We appreciate the questions. We don’t believe our setup is strictly specific to causal language modeling.
1. The adverse conditions we list in our Introduction are mostly due to the imbalanced nature of tokens in natural language data. Intuitively, they would apply beyond autoregressive modeling.
2. We expect to see similar results in masked language modeling. However, it may depend on which tokens are frequently masked in the data (if a specific scheme is used). We hypothesize that one could conduct experiments with similar results on bidirectional encoders such as BERT or T5 (or their derivatives), but we imagine they would present additional challenges, such as the lower token sample efficiency of the MLM paradigm or the simultaneous prediction of multiple dependent/correlated tokens.
> (Related to the previous question) What is classification error (line 130) in the context of next-token prediction and what do you mean by "irreducible noise’’? How do you define the error for contexts that appear several times with different next tokens/labels?
The loss is token cross-entropy, and the error we use is the average misclassification rate over all tokens. We do not treat ambiguous samples differently and instead defer to the diminishing-proportion argument previously made regarding soft labels. Irreducible noise refers to the minimum loss and error that cannot be reduced by further scaling, owing to soft-label contexts.
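For concreteness, the two standard quantities described here (token-level cross-entropy and average misclassification rate) can be sketched as follows; the toy logits and labels are purely illustrative and not drawn from our models:

```python
import math

def token_metrics(logits, labels):
    """Token-level cross-entropy loss and misclassification rate.

    logits: list of per-token score vectors (one per context position)
    labels: list of next-token ids (one per context position)
    """
    total_ce, errors = 0.0, 0
    for scores, label in zip(logits, labels):
        # cross-entropy = log-sum-exp minus the true token's score
        log_z = math.log(sum(math.exp(s) for s in scores))
        total_ce += log_z - scores[label]
        # error = argmax prediction disagrees with the label
        pred = max(range(len(scores)), key=lambda c: scores[c])
        errors += int(pred != label)
    return total_ce / len(labels), errors / len(labels)

logits = [[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]]
labels = [0, 2]  # the second prediction is wrong
ce, err = token_metrics(logits, labels)
assert err == 0.5
```

Ambiguous contexts receive no special handling: each occurrence simply contributes its own cross-entropy and error term.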
> Intuitively, why do you expect that a model minimizing UNC3 has better generalization? Minimizing NC3 means the classifiers and mean vectors are aligned which is consistent with the classification objective. However, minimizing UNC3 (CoV of the term in Eq (8)), only implies the degree of misalignment between the classifiers w and mean embeddings μ is uniform across classes/samples. How does this connect to generalization?
Our initial manuscript didn't include our reasoning behind UNC3 and we neglected to provide it in our Appendix, so we thank the reviewer for raising this oversight.
The goal of (self-)duality is to minimize the angles between mean vectors and their corresponding classifiers. This can be measured through the expectation (over class or token pairs) of the squared angle between vectors: $\mathbb E[\theta^2]$.
$$\mathbb E[\theta^2] = \mathbb E[\theta]^2 + \text{Var}[\theta]$$
NC3 measures the average angle $\mathbb E[\theta]$, whereas our newly introduced UNC3 measures the variance in the angles $\text{Var}[\theta]$. Achieving duality ultimately requires minimizing both terms.
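As a quick numerical sanity check of this decomposition (using random vectors as stand-ins for the actual classifiers $w$ and mean embeddings $\mu$, which is purely illustrative):

```python
import math
import random
import statistics

random.seed(0)
d, C = 8, 100  # toy embedding dimension and number of classes

def angle(u, v):
    """Angle in radians between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / norms)))

# Random stand-ins for classifier rows w_c and mean embeddings mu_c.
thetas = [
    angle([random.gauss(0, 1) for _ in range(d)],
          [random.gauss(0, 1) for _ in range(d)])
    for _ in range(C)
]

mean_sq = sum(t * t for t in thetas) / C                            # E[theta^2]
decomposed = statistics.fmean(thetas) ** 2 + statistics.pvariance(thetas)
assert abs(mean_sq - decomposed) < 1e-9  # E[theta^2] = E[theta]^2 + Var[theta]
```

The identity holds for any collection of angles, so minimizing $\mathbb E[\theta^2]$ indeed requires driving down both the mean angle (NC3) and the angle variance (UNC3).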
Once again, we appreciate this comment and will add a short discussion on duality decomposition in $\S$ 3.6 and $\S$ 4.6.
> Do you observe a decrease in the NC metrics and validation loss as you train the models longer (e.g., 1 vs 10 epochs)?
Yes. Appendices D, E, F, G, H, I, J, L, and M all show trends towards NC across training (left-to-right) in sufficiently large models. We show more explicit scatter plots (red = more parameters) in the 1-page PDF supplied in the general rebuttal. That PDF also includes validation loss with respect to training. We will also add these figures to the Appendix of our main manuscript.
---
Overall, we appreciate the constructive comments of the reviewer. Should our response address the concerns raised, would you consider raising the score? Thank you.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response.
Thanks for clarifying NC3. The intuition appears to rely on both NC3 and UNC3 being correlated with generalization, but Figs. 3, 19, and 21 show negligible/no correlation for NC3. This makes arguments like the one in line 255 somewhat misleading. Defining a metric that correlates with generalization doesn't make it a better replacement for an NC metric that (based on your arguments) is also likely to be tied to generalization. I recommend clarifying this issue in the updated manuscript.
Overall, I still feel like the technical novelties for adequately addressing the intricacies of language setups are limited in this paper. However, I also acknowledge that the paper, along with its extensive experiments, can motivate further investigations into extending the NC literature to language tasks. Thus, I will slightly increase my score. | Summary: This work focuses on studying the Neural Collapse phenomenon in the context of language model training. The author first introduces the original NC properties, explaining how such metrics may not apply to the case of LLM training given 1) the ambiguity of next-token prediction in language, 2) the large number of possible tokens, 3) token imbalance, and 4) under-parameterization or under-training of LLMs.
Having established this, the authors introduce new metrics based on the NC ones. Further, the authors analyze the relationship between these metrics and validation loss on a range of models trained on TinyStories dataset. In this regard, 1) collapse of token representation ($ NC_1 $), 2) Hyper-spherical uniformity which represents maximal separation ($ G-NC_2 $), 3) alignment between token feature mean and token classifier ($ U-NC_3 $) and 4) NCC accuracy of features ($ NC4 $) have positive (and stronger) correlation with validation loss.
Strengths: The connection established between Neural Collapse and LLM training is an interesting point. The newly introduced NC metrics are viable not only for LLMs but also for other classification and training regimes that suffer from similar problems as the language domain.
Further, the experiments on TinyStories are performed on a large number of models to establish a connection between the metrics and validation loss, and the results could be used to develop evaluation metrics for LLM training.
Weaknesses: 1) **(Major)** I appreciate the extensive experimental work on establishing a connection between the proposed NC metrics and validation accuracy; however, I believe the coefficients of determination provided in Table 1 suggest a low correlation between NC properties and validation loss. While I see the higher correlation for NC1 and NC4 in Fig 1, I’m having a hard time convincing myself that other factors such as model size and architecture don’t play a role in improving the loss.
2) **(Major)** While I understand the attempt at making the autoregressive next-token prediction of LLMs comparable to a simple classification setup, I do not think I agree with the “not soft label, not multi label” argument. As the authors suggest, the language format is rather ambiguous. Considering a simple next-word prediction, “I went to the ___” can be followed by a variety of valid words. The same applies to token-based predictions. I believe not having a mechanism to account for this and dismissing the multi-labelness or soft-labelness of language removes much of the uniqueness and difficulty of dealing with LLM training.
3) **(Major)** Considering how languages themselves have an imbalanced nature, I wonder whether the suggested ETF or hyperspherical geometric structure of features and weights is actually optimal for LLMs. As an example, in the case where we train a CIFAR-10 classifier under artificial imbalance, I would understand why symmetric features could potentially help improve test accuracy given the balanced nature of the validation set. However, I have a hard time being convinced the same is true in language.
4) **(Minor)** I find it hard at times to follow the paper’s ideas. Particularly with regards to section 4, I believe a bit more structure or separation of ideas could help readers better digest the conclusions.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Is it true that NC properties are attributed to better generalization for classification? Could the authors please provide some references.
2) Have the authors considered using smaller or simpler language datasets to combat the problems with large numbers of classes, or having models be trained for a longer time in order to see better convergence results for the NC metrics as illustrated in the appendix?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **(Major)** I appreciate the extensive experimental work on establishing a connection between the proposed NC metrics and validation accuracy; however, I believe the coefficient of determinations provided in Table 1 suggests a low correlation between NC properties and validation loss. While I see the higher correlation for NC1 and NC4 in Fig 1, I’m having a hard time convincing myself that other factors such as model size and architecture don’t play a role in improving the loss.
To clarify, we don't claim that size and training don't contribute to reducing the loss. Our principal observation is that scaling improves both performance (as expected) and NC, and that the two are associated (shown in Figure 1).
To address the concern that scaling alone is the confounding factor, we conducted a seed sweep ($\S$ 4.5) to test an independent correlation. Table 1 shows that most correlations between NC and generalization are statistically significant. Equinormness is not correlated, but it only accounts for part of NC and is superseded by GNC2 anyway. The only real exception is therefore UNC3, which seems to be entirely confounded by model scale. (Although this does highlight the effect of NC3 on performance independent of scale.)
> **(Major)** While I understand the attempt at making the autoregressive next-token prediction of LLMs comparable to a simple classification setup, I do not think I agree with the “not soft label, not multi label” argument. As the authors suggest, the language format is rather ambiguous. Considering a simple next-word prediction, “I went to the ___” can be followed by a variety of valid words. The same applies to token-based predictions. I believe not having a mechanism to account for this and dismissing the multi-labelness or soft-labelness of language removes much of the uniqueness and difficulty of dealing with LLM training.
Thanks for the detailed comment. Firstly, we agree with the reviewer that there are indeed ambiguous samples, particularly with short sentences, such as the example “I went to the ___.” These cases can resemble soft label data examples, and we will edit the relevant paragraph in the Related Works to reflect this.
We note however that ambiguous contexts are typically shorter and found at the beginning of sequences. For longer sequences, the probability of two identical contexts followed by a different word is low. Additionally, longer contexts outnumber shorter ones in the training data, especially as LLMs (such as Llama 3.1) handle longer contexts. Shorter contexts could even be excluded or treated as outliers. Thus, we conclude that these anomalies likely represent a minority of the data and aren't a significant issue overall. Nonetheless, we take the first step in NC at the next-token level, and future work might incorporate multi-label (arXiv:2310.15903) and mixup (arXiv:2402.06171) NC.
> **(Major)** Considering how languages themselves have an imbalanced nature, I wonder whether the suggested ETF of hypersphere features and weights geometric structure are actually optimal for LLMs. As an example, in the case where for a CIFAR10 classifier, we train the model under artificial imbalance, I would understand why symmetric features could potentially help improve test accuracy given the balanced nature of the validation set. However, I have a hard time being convinced the same is true in language.
Our primary objective is to investigate the structures that form in standard LLM training. We recognize that the convergence geometry in current LLMs may not be optimal, likely due to using CE loss (adapted for balanced data) on an imbalanced dataset. We do not claim the simplex ETF or hypersphere is optimal; rather, we focus on characterizing the patterns that emerge at scale in relation to baseline geometries. However, recent works have proposed methods for addressing data imbalance (arXiv:2301.00437v5, arXiv:2301.01100, and MLR: Liu et al. (2023)). We hope our insights may lead to adaptations of such methods to optimize these structures to address the imbalanced nature of language.
> **(Minor)** I find it hard at times to follow the paper’s ideas. Particularly with regards to section 4, I believe a bit more structure or separation of ideas could help readers better digest the conclusions.
We sympathize with this sentiment. We'll move 4.5 through 4.7 (including Table 1) into a separate section.
> Is it true that NC properties are attributed to better generalization for classification? Could the authors please provide some references?
The original NC showed some relationship to generalization. In particular, we highlight the results in transfer learning (arXiv: 2112.15121) and a generalization bound that can estimate test performance (arXiv: 2202.09028).
We aren't aware of other works that explicitly studied the relationship between NC and generalization. However, our results appear to reinforce the generalization narrative as we observe correlations long before the terminal phase of training, some of which are independent of scale. We hope to inspire future works that might definitively answer this open question.
> Have the authors considered using smaller or simpler language datasets to combat the problems with large numbers of classes, or having models be trained for a longer time to see better convergence results for the NC metrics as illustrated in the appendix?
Thank you for your insightful suggestion. We considered using smaller or simpler language data to potentially improve convergence. However, we focus on large-scale language modeling scenarios to ensure our findings are robust and reflective of contemporary practices. This would render our results applicable and generalizable to real-world applications. The synthetic nature of TinyStories is a caveat to our study (as the reviewer implies), but we provide a lengthy explanation for its use over real data in the general rebuttal above.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the elaborate response. With regards to the multi-label context, I understand the arguments regarding the impact of context length and I appreciate the explanation. I would like this point to be further clarified in future revisions. I would further like to encourage evaluation metrics other than test loss to validate the relationship between NC properties and improving LLM quality. Having said this, I would like to increase my score to a 5. | Summary: The final layers of classifiers show a property called neural collapse (NC), which is seen as beneficial to model performance. The authors study its appearance in causal language models (CLMs), and point out that CLMs do not respect the usual conditions for NC (CLMs are trained on noisy, unbalanced data with more tokens than the models have dimensions, and training is stopped before the loss reaches 0).
They then provide a theoretical framework to adapt the NC properties to CLMs (arguing for two relaxations from past works: measuring hyperspherical uniformity and uniform duality), and show that, despite expectations, evidence of NC exists and is stronger for better-performing models.
Strengths: Provided measured and well-formulated novel results regarding the impact of scale, training, and number of parameters on the different components of generalisation. While a highly technical read that required a lot of concentration, all the information required for understanding the concepts is introduced. Conclusions are measured and entirely based on evidence from theory and empirical observation, yet the discussion manages to highlight the implications of such research. Exciting mathematical tools I hope to use in the future.
Weaknesses: Apart from evaluation loss, model performance metrics from commonly used LLM benchmarks are not provided, making high level comprehension of the technical observations more complicated.
Technical Quality: 4
Clarity: 4
Questions for Authors: Considering CLMs to be classifiers is technically sound; nonetheless, unlike with classification, tokens can mean different things (e.g., homonyms), which might also explain divergence from the classic NC model. Did you observe any such groupings or effects?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations are, to my knowledge, well addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Apart from evaluation loss, model performance metrics from commonly used LLM benchmarks are not provided, making high level comprehension of the technical observations more complicated.
LLM researchers are indeed interested in performance metrics beyond the cross-entropy (CE) evaluation loss that we use. However, as our work concerns the measurement of NC under model pre-training only, other benchmarks (particularly those on downstream tasks) would not necessarily be appropriate as they measure specific capabilities rather than generic stochastic token prediction; therefore, they would be out-of-scope for our work.
Furthermore, according to several recent works on LLM evaluations (https://arxiv.org/abs/2403.15796, https://arxiv.org/abs/2404.09937, https://arxiv.org/abs/2407.06645), downstream capabilities of LLMs are roughly correlated with their abilities to compress their pre-training data. Based on these findings, we find that CE loss is the most sensible metric with which to measure generalization.
We thank the reviewer for raising this question as we see fit to include a short discussion on this and potential future directions towards the end of our manuscript (citing the aforementioned works).
> Considering CLMs to be classifiers is technically sound, nonetheless unlike with classification tokens can mean different things (ex:homonyms) these might also explain divergence from the classic NC model. Did you observe any such groupings or effects?
Appreciate the question! We inspected token-wise NC values in our largest and most-trained model (12-layer d=1024, 10 epochs). Based on your suggestion, we chose 15 homonyms and found that most have much shorter mean vector norms (meaning they’re closer to the global center) than the average token. This makes sense as homonyms present conflicts and interference.
| GPT-Neo Token ID | Token Text | Scaled Norm |
| ---------------- | ---------- | ----------- |
| 808 | row | 102.9727 |
| 1806 | ring | 63.4789 |
| 2971 | light | 74.4070 |
| 4053 | well | 55.3457 |
| 4475 | date | 111.7086 |
| 8664 | bat | 83.0259 |
| 9464 | left | 81.7467 |
| 15699 | match | 114.7236 |
| 16469 | spring | 80.0322 |
| 17796 | bank | 90.8248 |
| 19204 | wave | 43.5028 |
| 19836 | close | 60.6004 |
| 22043 | fair | 57.2620 |
| 28230 | lead | 102.0583 |
| 36859 | bowl | 88.7034 |
| AVERAGE | AVERAGE | 106.8762 |
We also observed that the individual variability and interference of some English first names with all other tokens were far below the average. This is also intuitive as names are distinct from one another and aren't typically used in the same contexts as other words (aside from articles).
| GPT-Neo Token ID | Token Text | CDNV | Interference |
| ---------------- | ---------- | ------------ | ------------ |
| 7371 | Donald | 8.176141e-05 | -0.008308 |
| 7554 | John | 0.000108 | -0.007270 |
| 11006 | David | 9.249406e-05 | -0.006167 |
| 12041 | Paul | 9.661650e-05 | -0.006768 |
| 13256 | Michael | 8.857495e-05 | -0.006625 |
| 14731 | James | 0.000101 | -0.006610 |
| 14967 | Tim | 0.000164 | -0.004741 |
| 17121 | William | 9.332073e-05 | -0.005867 |
| 18050 | Jim | 0.000109 | -0.006896 |
| 18308 | Harry | 0.000110 | -0.006749 |
| 19156 | Robert | 8.133316e-05 | -0.006438 |
| 19206 | Steve | 9.622102e-05 | -0.006541 |
| 19962 | Daniel | 0.000114 | -0.006963 |
| 20191 | George | 0.000108 | -0.006674 |
| 20508 | Andrew | 9.280709e-05 | -0.006036 |
| 21868 | Ryan | 8.332534e-05 | -0.006594 |
| 22405 | Thomas | 9.526886e-05 | -0.006400 |
| 23865 | Kevin | 9.392182e-05 | -0.005817 |
| 24119 | Mary | 8.846451e-05 | -0.006322 |
| 24761 | Brian | 8.554082e-05 | -0.006003 |
| 24778 | Martin | 9.663536e-05 | -0.007183 |
| 25004 | Eric | 9.731254e-05 | -0.006565 |
| 25372 | Matthew | 8.109680e-05 | -0.005544 |
| 28711 | Charles | 7.878443e-05 | -0.006371 |
| 29284 | Sarah | 7.926711e-05 | -0.006366 |
| 30730 | Luke | 0.000100 | -0.005830 |
| 31160 | Anna | 0.000175 | -0.005005 |
| 32476 | Henry | 0.000109 | -0.006685 |
| 32697 | Anthony | 6.594844e-05 | -0.006055 |
| 34831 | Kelly | 8.580151e-05 | -0.006017 |
| 40656 | Robin | 0.000118 | -0.006808 |
| 42516 | Kyle | 8.159300e-05 | -0.005333 |
| 43187 | Jennifer | 8.750914e-05 | -0.006629 |
| 43568 | Elizabeth | 6.347037e-05 | -0.006462 |
| 43687 | Laura | 8.584216e-05 | -0.005922 |
| 44484 | Alice | 9.208282e-05 | -0.006416 |
| 45572 | Jessica | 8.405318e-05 | -0.005614 |
| 46751 | Jacob | 9.124901e-05 | -0.005349 |
| AVERAGE | AVERAGE | 0.000177 | 0.000519 |
There are probably thousands of groupings or linguistic relationships one could observe in the NC measurements, so we’ll leave such interpretability to future applications.
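For readers who wish to reproduce this style of per-token analysis, here is a rough sketch of the two kinds of quantities tabulated above. The random embeddings stand in for the model's actual last-layer token means, and the scaled-norm and interference definitions here (norms scaled so the average token sits at 100; mean pairwise cosine of centered means) are simplified illustrative stand-ins, not the paper's exact metrics:

```python
import math
import random

random.seed(0)

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

def center(means):
    """Subtract the global mean from each per-token mean embedding."""
    d = len(means[0])
    g = [sum(m[i] for m in means) / len(means) for i in range(d)]
    return [[m[i] - g[i] for i in range(d)] for m in means]

# Random stand-ins for the last-layer per-token mean embeddings.
means = center([[random.gauss(0, 1) for _ in range(16)] for _ in range(50)])

# "Scaled norm": distance from the global center, scaled so the average
# token sits at 100 (tokens well below 100 hug the center, like homonyms).
norms = [norm(m) for m in means]
scale = 100.0 / (sum(norms) / len(norms))
scaled_norms = [n * scale for n in norms]

# Interference of token 0 with every other token: mean pairwise cosine.
interference = sum(cosine(means[0], m) for m in means[1:]) / (len(means) - 1)
```

With a trained model, `means` would instead be the token-wise class means of the last-layer context embeddings, and homonyms would be expected to show below-average scaled norms, as in the table above.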
---
We thank the reviewer for the thought-provoking questions. We hope our responses adequately address them and warrant a slightly higher score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Those additional results also tie in with my other question on common LLM benchmarks, as they provide examples of use cases of the mathematical framework. I maintain my opinion that this is an excellent work, and maintain the high score. | Summary: This paper investigates neural collapse (NC) -- properties of the penultimate feature representation of DNNs -- in causal language models (CLMs). Previous NC studies have primarily focused on classification problems with balanced classes and few labels compared to the feature dimensionality. This paper finds that NC properties in CLMs are correlated with generalisation. This finding is based on extensive experimental evaluations on the TinyStories dataset.
Strengths: * The NC analysis is interesting and it's noteworthy that properties of NC appear to correlate with generalisation in CLMs, which differ in important ways from other classification settings where NC has been studied.
* The empirical results seem to paint a clear trend in terms of NC and generalisation.
* The presentation is generally quite good, with contributions and significance clearly spelled out.
Weaknesses: * I’m somewhat unclear on the motivation for using TinyStories, a purely synthetic dataset. While I understand the benefit of using a small dataset to ease the computational burden of the experiments, I would have liked to see some further experiments on real data confirming the observed trends. For example, you could sample a small dataset of human-written stories.
* I would have liked to see more practical guidance on the value of the proposed metrics. For example, it’s not clear if they provide any value in terms of assessing model fit beyond what a held-out validation would (more cheaply) provide.
* In several places, the wording is unusual / confusing. For example, L47-L48 says that "[...] LLM learn to model aleatoric uncertainty, which can be viewed as stochastic token prediction," which is a peculiar/wordy way to describe a standard MLE procedure.
* While the experiments are fairly comprehensive, the technical novelty is relatively low. The paper mostly consists of methods from prior works being applied to a new classification setting.
Technical Quality: 4
Clarity: 4
Questions for Authors: See "Weaknesses."
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > I’m somewhat unclear on the motivation for using TinyStories, a purely synthetic dataset. While I understand the benefit of using a small dataset to ease the computational burden of the experiments, I would have liked to see some further experiments on real data confirming the observed trends. For example, you could sample a small dataset of human-written stories.
This crucial question arose during our research, as using a synthetic dataset may limit the generality of our results to human-written data (see our lengthy explanation in the general rebuttal). We acknowledge the importance of validating our findings with real data and are aware of follow-up efforts to scale our experiments with appropriately sized models and datasets.
> I would have liked to see more practical guidance on the value of the proposed metrics. For example, it’s not clear if they provide any value in terms of assessing model fit beyond what a held-out validation would (more cheaply) provide.
NC has been studied in and in relation to many areas of machine learning. We defer the vast related works and applications to the Related Works and Discussion sections of our manuscript, while our response here will therefore be tailored towards practical consequences and applications of studying NC in CLMs.
1. Better optimization strategies: LLMs are typically pre-trained with CE loss under teacher forcing. It stands to reason that the works listed in the “Learning to collapse” Discussion paragraph can be applied to, or inspire, better objective functions towards potentially better geometric configurations for language models, such as the hierarchical structures presented by Liang & Davis (2023). The implication is that future LLMs could be trained in fewer steps or to better performance.
2. Interpretability of contextual token-wise behaviors: our measurements provide insight into behaviors of individual token embeddings. Token-wise NC3 can reveal how confident an LLM is about certain tokens in their top-layer classifiers. For example, the number and density of clusters for a particular token can shed light into its various meanings and uses. This would be particularly useful as LLMs adapt to ever-evolving use of language and further expansion into non-English domains.
3. Interpretability of pair-wise interactions: our measurements primarily center around pair-wise relationships between tokens in a vocabulary. The pair-wise arrangements are critically important to modeling interference between tokens because their context vectors live on a lower-dimensional hypersphere. For example, you can tell how related or interchangeable two words are based on their noise and interference (NC1,2), or how antithetical or unrelated they are based on orthogonality (NC2).
4. Interpretability for fairness: natural language is inherently imbalanced. Individual or pair-wise analyses of representations can reveal inequalities within the vocabulary or across topics. These insights can guide researchers and engineers to address inequities in a targeted and measurable way.
5. Interpretability for ethics: if there are concerns about biases or safety issues in general language modeling, NC analysis can aid researchers in interpreting how LLMs could pose risks to users, and how to safely mitigate risks without degrading model quality.
6. Continual learning: the cited works [19, 20, 21] studied NC in (task-)incremental learning problems and provided insights or improvements. Since an LLM (especially a foundation model) must continually learn language patterns and develop capabilities, NC can provide carefully structured configurations that allow for graceful lifelong learning.
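As a minimal sketch of the pair-wise probing described in points 3–4, one could compare the directions of per-token mean embeddings: near-orthogonal pairs suggest NC2-style unrelatedness, while high-coherence pairs suggest interference or interchangeability. The function name and setup below are our illustrative assumptions, not the paper's code:

```python
import numpy as np

def pairwise_coherence(class_means):
    """Cosine similarity between per-token mean embeddings.
    Off-diagonal entries near 0 indicate NC2-style orthogonality
    (unrelated tokens); large entries indicate interference or
    interchangeability between tokens."""
    M = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    return M @ M.T

# toy example: 5 hypothetical token classes in a 64-dim embedding space
rng = np.random.default_rng(0)
G = pairwise_coherence(rng.standard_normal((5, 64)))
assert G.shape == (5, 5) and np.allclose(np.diag(G), 1.0)
```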
Of course, it remains to be seen exactly what form or role NC will take in these areas of further study. It is also probable that there are benefits of NC analysis beyond our knowledge. Our work here takes the first step in extending the body of NC work into the more irregular and demanding setting of causal language modeling.
We are grateful to the reviewer for probing this response. We will further expand our Significance and Discussion (sub)sections to explicate these practical guidelines.
> In several places, the wording is unusual / confusing. For example, L47-L48 says that "[...] LLM learn to model aleatoric uncertainty, which can be viewed as stochastic token prediction," which is a peculiar/wordy way to describe a standard MLE procedure.
We agree with the reviewer and will revise L47-48 to “[…] learn to stochastically predict tokens to generate text.” We also identified the following opportunities for rewording or clarification:
1. L107-108 will be adjusted: “[…] classes number in the tens of thousands.”
2. L128 will be clarified such that ambiguous contexts *are* “soft-label” samples.
3. L230-231 will be rewritten: “These noise reductions are associated with generalization (Fig. 1, left, “NC1”); this relationship grows stronger with model size.”
Should the reviewer feel there are more instances that we did not cover, we will happily review further during the discussion period. And of course, we will perform more passes on this paper to improve the writing for future revisions.
> While the experiments are fairly comprehensive, the technical novelty is relatively low. The paper mostly consists of methods from prior works being applied to a new classification setting.
This is a fair point: we mostly applied previous expressions and methods to the causal language modeling setting. Our contribution lies in applying the canonical NC framework under more adverse conditions (a larger and imbalanced vocabulary, ambiguous contexts, undertraining), an effort that is reflected in our work. We hope that this first attempt at analyzing NC in this area inspires further work that better adapts to the ever-changing landscape.
---
We are grateful for the helpful comments. Should our response warrant it, we would greatly appreciate the reviewer raising the score.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I maintain my accept recommendation. | Rebuttal 1:
Rebuttal: ## The Choice of TinyStories
The study of NC in causal language modeling at the token level would be very expensive, so the motivation to use a small dataset is clear. However, most commonly used text datasets such as WikiText, BookCorpus, CommonCrawl, or most subsets from the Pile are much too complex and broad to be effectively compressed by CLMs of the scale that we work with.
WikiText-2 and WikiText-103 present significant drawbacks for our experiments. Both datasets contain a considerable amount of low-quality data that doesn't concentrate on essential linguistic structures such as grammar, vocabulary, facts, and reasoning. WikiText-2 has a similar empirical vocabulary to TinyStories under the GPT-Neo tokenizer (27K vs. 29K) but only around 34K rows of training data compared to 2.1M in TinyStories. Our small-scale NC experiment on WikiText-2 revealed that the models were very brittle and prone to overfitting. On the other hand, WikiText-103 is comparably sized to TinyStories but utilizes around 44K unique tokens. Our CLMs trained on WikiText-103 struggled to produce coherent sentences, likely due to its excessive breadth and information, as noted by the authors of TinyStories. Beyond these two, we were unable to find any real datasets that both follow established scaling laws (Kaplan et al., Hoffmann et al.) for CLMs at our scale and are simple enough to suit the analysis of NC.
This is where the TinyStories dataset becomes invaluable. Their manuscript (Eldan & Li) from last year informs much of our reasoning. This is an excerpt from their manuscript:
> We introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3 to 4-year-olds usually understand, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still **produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities.**
According to its authors, it’s explicitly designed to preserve the essential elements of natural language, such as grammar, vocabulary, facts, and reasoning, while being smaller and more refined in terms of its breadth and diversity. Unlike large corpora that can overwhelm small language models (SLMs) due to their excessive breadth and diversity, TinyStories offers a concentrated dataset that hones in on core linguistic structures and reasoning capabilities. This is evident in its small word vocabulary, consisting of approximately 1500 words that a child would use, and in its 29K empirical vocabulary under the GPT-Neo tokenizer.
Despite its concentrated nature, TinyStories enables models trained on it to produce grammatically correct, factual, and reasonable stories. Additionally, these models can be finetuned on specific instructions found in the TinyStories-Instruct dataset. The authors of TinyStories also demonstrate that their models can creatively produce stories that are dissimilar enough to their training data, indicating a balanced capability for generalization and creativity.
One particular advantage of TinyStories is its small vocabulary relative to total training tokens, which yields a reasonable number of classes with higher average token counts. Conveniently, frequency analysis of the overall dataset produced a distribution (Figure 4 in the first draft) similar to that of real human language. This is relevant because the ability to measure NC, and a CLM's ability to compress language data into distinct geometries, depends partially on the ratios between embedding dimension, vocabulary size, and average token counts. TinyStories provides a good balance for an initial study of this phenomenon.
Additionally, TinyStories has more regular structure as GPT-3.5/4 were instructed to produce children’s stories with certain themes and forms with a limited vocabulary. We believed that this would reduce the amount of clustering noise from the very broad information and structures in real general data, and allow our smaller CLMs to exhibit clear trends towards NC.
Furthermore, TinyStories was created using GPT-3.5/4, which are advanced language models with significantly larger architectures trained on orders of magnitude more tokens, helping minimize the effect of the synthetic nature of the generated dataset. We also considered the possible effect of model collapse as a result of training on synthetic data, but Shumailov et al. and some follow-up works suggest that a single iteration of data generation (as was used to generate TinyStories) causes negligible model collapse.
With all that said, we are deeply grateful to the reviewers who raised this issue. We will include the above explanation at length in our Appendix B and a summary in $\S$ 3.1 where we introduce TinyStories.
### References
- Kaplan et al. (2020): https://arxiv.org/abs/2001.08361
- Hoffmann et al. (2022): https://arxiv.org/abs/2203.15556
- Eldan & Li (2023): https://arxiv.org/abs/2305.07759
- Shumailov et al. (2023): https://arxiv.org/abs/2305.17493
Pdf: /pdf/174d96d9d7e0e8fc5530fa081ffe594c794d83be.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Plaintext-Ciphertext Cryptographic Problems via ANF-based SAT Instance Representation | Accept (poster) | Summary: The paper explores an approach for predicting the satisfiability of Algebraic Normal Form (ANF) Boolean SAT instances by graph neural networks (GNN). The approach is similar to the framework of NeuroSAT, but a new graph structure is proposed for handling the quadratic terms in ANF. Experiments indicate that the proposed approach reaches better accuracy in predicting satisfiability on certain ANF instances compared with NeuroSAT on CNF, and the prediction time is much faster than the running time of SAT solvers.
Strengths: The paper picks up several interesting and important aspects. First, while CNF-based SAT solvers are dominating and CNF is the most prevalent format in SAT solving, non-CNF constraints, such as ANF, can still be useful in many applications. Encoding them into CNF can harm the performance of SAT solvers and it is desirable to have native solvers for those types of constraints. Second, using machine learning for an end-to-end framework for combinatorial problems has been an active research area.
This paper is clearly written, with technical details at an appropriate level.
Weaknesses: 1. The results of this paper, in my opinion, are not well claimed. The proposed approach based on GNN is only a predictor with errors. In contrast, SAT solvers must find proof for the answers they give, i.e., finding a satisfying assignment or proving unsatisfiability. Comparing the running time of making a prediction by GNN and that of a SAT solver is not fair, which should be clearly stated in the paper.
2. The proposed approach only considers the ANF format up to order 2 while higher ordered cases are not discussed. This limitation makes it doubtful whether the proposed graph structure can be useful in general.
After the rebuttal session, I decided to raise my score from 4 to 5.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the proposed approach be extended to allowing AND terms with more than 2 variables? If so, how should one handle the exponentially many nodes in a graph?
2. Would it be possible for the proposed approach to output a proof for its predication? For example, can you extract a satisfying assignment or an UNSAT core from the embeddings of the GNN?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please see the discussion above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer ZcPU (Rating: 4 / Confidence: 4)
Thank you for your thoughtful review and for highlighting the key contributions of our paper. We are pleased that you recognized the appropriate technical depth of our work. Your elaboration on the significance of non-CNF constraints, particularly ANF, and on the potential of machine learning in combinatorial problem-solving further underscores the importance of our research. Your astute observations pinpoint crucial aspects of our work and give us an excellent opportunity to expound on its core contributions. We address your points in detail below.
---
We hope that this response clarifies the importance of our work in dealing with non-CNF constraints (especially ANFs) and in solving combinatorial problems using machine learning techniques. We believe that our approach has the potential to open up new avenues for research and practical applications in areas where traditional CNF-based methods may not be ideal. Our method also has great potential for solving higher-order ANF problems. We remain committed to advancing this area of research. We would appreciate it if you would reconsider your rating.
---
#### Q2 "Would it be possible for the proposed approach to output a proof for its predication? For example, can you extract a satisfying assignment or an UNSAT core from the embeddings of the GNN?"
**A1** Thank you for your comment. We can provide proof for our prediction using the following methods:
1. **Key-Solving Process**: In the plaintext-ciphertext cryptographic problem, CryptoANFNet can determine the assignment for a specific bit of the key by following the key-solving process outlined in Section 4.4 of our paper. Using the key-solving algorithm described there, CryptoANFNet can sequentially deduce the value of each key bit; Table 3 demonstrates its capability in cryptographic key-solving tasks.
2. **Extracting Satisfying Assignments**: CryptoANFNet can also extract satisfying assignments from the embeddings of the literals. To further validate this, we conducted experiments to predict assignments using the embeddings generated by CryptoANFNet on SR datasets. The results, as shown below, illustrate CryptoANFNet's effectiveness in producing assignments from the learned embeddings.
|Dataset|SR(25)|SR(5)-high|SR(10)-high|
|---|---|---|---|
| NeuroSAT | 57.0% | 53.5% | 51.0% |
| CryptoANFNet | **71.5%** | **71.0%** | **63.5%** |
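Such assignment extraction can be sketched in a NeuroSAT-style way: split the literal embeddings into two clusters and verify the induced assignment against the instance. The function below is our illustrative reconstruction (using a principal-direction split rather than the exact clustering), not the code used in the paper:

```python
import numpy as np

def decode_assignment(lit_emb, clauses, n_vars):
    """Sketch of embedding-based decoding: split literal embeddings into
    two groups along their top principal direction, try both polarities,
    and return an assignment only if it verifiably satisfies the clauses.
    lit_emb rows are ordered [x1..xn, -x1..-xn]; clauses use DIMACS-style
    signed integers."""
    X = lit_emb - lit_emb.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    side = X @ Vt[0] > 0                      # cluster label per literal
    sat = lambda a, c: any(a[abs(l)] == (l > 0) for l in c)
    for polarity in (True, False):            # cluster->truth mapping is arbitrary
        assign = {i + 1: bool(side[i]) == polarity for i in range(n_vars)}
        if all(sat(assign, c) for c in clauses):
            return assign
    return None
```

Since verifying a candidate assignment is cheap, a wrong decode is detected immediately, which is what makes fast-but-fallible predictors usable in practice.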
---
#### W1 "The results of this paper, in my opinion, are not well claimed. The proposed approach based on GNN is only a predictor with errors. (see weaknesses)."
**A3** Thank you. We acknowledge the concern regarding the potential for prediction errors in ML-based solvers; no model can guarantee 100% accuracy across all datasets. However, as demonstrated in **Section 5.4** of the paper, CryptoANFNet exhibits significantly faster solving speed than traditional solvers. With this rapid solving speed, our model can be effectively utilized as a solver in the practical applications below:
1. As detailed in Section 4.4 of our paper, using the Key-solving algorithm, CryptoANFNet can sequentially determine the value of each specific bit of the key in a plaintext-ciphertext problem.
2. Besides, CryptoANFNet can also extract satisfying assignments from the embeddings of the literals, as shown in **A1**.
In practice, this solver can achieve a preliminary solution for a key or a satisfying assignment with relatively high accuracy. Since verifying the correctness of these solutions is highly efficient, CryptoANFNet offers a significant advantage over traditional solvers in terms of application efficiency, aligning with the primary research objective of our paper.
To demonstrate the feasibility of our approach, we compared the running time of making a prediction by GNN and that of a SAT solver. As shown in Table 4 of our paper, ML-based solvers like CryptoANFNet exhibit nearly 50x faster solving speed. Additionally, Table 3 in the paper and the table in **A1** indicate that CryptoANFNet can obtain a preliminary solution for a key or a satisfying assignment with relatively high accuracy.
---
#### W2&Q1 "The proposed approach only considers the ANF format up to order 2 while higher ordered cases are not discussed."
**A2** Thank you. We will address your concerns from the following three aspects:
1. **In cryptography, MQ (Multivariate Quadratic) problems represented by second-order ANF (Algebraic Normal Form) are of great significance and have wide applications.** As explained in Section 3 of our paper, operations commonly used in cryptographic problems—such as modular addition, XOR, AND, left circular shift, and right circular shift—can be effectively represented using second-order ANF formulas. This conversion of cryptographic problems into MQ problems highlights the broad applicability of second-order ANF. Besides, previous research [cite 1] has already underscored the importance and application of MQ problems in cryptography. Second-order ANF is versatile enough to handle many tasks, demonstrating its extensive range of applications.
2. **Second-order ANF formulas are complete, meaning that they can represent any finite-sized circuit or cryptographic problem.** Research [cite 2] has shown that CNF formulas can be converted into ANF formulas. Since CNF is complete, ANF formulas are inherently complete as well. Furthermore, higher-order ANF formulas can be converted into second-order ANF formulas. As a naive method, consider a higher-order ANF formula like $x_1x_2x_3+x_3x_4=0$. By introducing the dummy variable $u_1$ and the equation $u_1+x_1x_2=0$, we can transform the original formula into $u_1x_3+x_3x_4=0$, effectively converting the higher-order ANF formula into a second-order one. In practice, more efficient conversion methods [cite 3] have been proposed to reduce the number of dummy variables.
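The dummy-variable reduction above is easy to sanity-check by brute force over GF(2); the snippet below (our illustration) confirms that the second-order system accepts exactly the same $(x_1,\dots,x_4)$ assignments as the original third-order formula:

```python
from itertools import product

def high_order_sat(x1, x2, x3, x4):
    # original higher-order ANF constraint: x1*x2*x3 + x3*x4 = 0 over GF(2)
    return (x1 * x2 * x3 + x3 * x4) % 2 == 0

def second_order_sat(x1, x2, x3, x4, u1):
    # after introducing the dummy variable u1 with u1 + x1*x2 = 0,
    # the constraint becomes the second-order system:
    #   u1 + x1*x2    = 0 over GF(2)
    #   u1*x3 + x3*x4 = 0 over GF(2)
    return (u1 + x1 * x2) % 2 == 0 and (u1 * x3 + x3 * x4) % 2 == 0

# the two formulations have identical solution sets over the original variables
orig = {xs for xs in product((0, 1), repeat=4) if high_order_sat(*xs)}
reduced = {xs for xs in product((0, 1), repeat=4)
           if any(second_order_sat(*xs, u1) for u1 in (0, 1))}
assert orig == reduced
```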
---
Rebuttal 2:
Comment: 3. **Our proposed method can also address higher-order ANF formulas.** As stated in line 216 of Section 4.3, CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes. High-order nodes are used only as intermediate results in the message-passing process. Thus, for high-order ANF formulas, i.e., AND terms with more than 2 variables, we have a similar message-passing process, shown below:
- For the message-passing process from vanilla literal node to clause node, we have:
$L^{(t)}_{l2c} = L\_{l2l}(M^T\_{l2l}L^{(t)})$
$[C^{(t)}\_{m,\text{pos}}, C^{(t)}\_{m,\text{neg}}] = M_{l2c}^T L_{\text{msg}}(L^{(t)}_{l2c})$
- For the message-passing process from clause node to vanilla literal node, we have:
$L^{(t)}\_{c2l} = M\_{l2c} C\_{\text{msg}}([C^{(t)}\_{\text{pos}}, C^{(t)}\_{\text{neg}}])$
$L^{(t)}\_m = M\_{l2l} L\_{l2m}(L^{(t)}\_{c2l})$
where $M_{l2l}$ is a **sparse** adjacency matrix with $M_{l2l}(i, j) = 1$ iff the $i$-th vanilla literal appears in the $j$-th high-order literal, and $M_{l2c}$ is the adjacency matrix with $M_{l2c}(i, j) = 1$ iff the $i$-th high-order literal appears in the $j$-th clause.
Using the sparse matrix $M_{l2l}$, we only need to retain the edges between vanilla literal nodes and higher-order nodes, without storing the entire adjacency matrix, which grows exponentially with higher orders. This approach effectively handles datasets comprising SAT instances with literals in various orders.
Note that the number of parameters in CryptoANFNet is independent of the number of nodes in the ANF-based graph generated from a SAT instance. Considering the intermediate results stored by CryptoANFNet, CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes. During each iteration, CryptoANFNet simultaneously computes intermediate result vectors for up to the number of distinct literals in the SAT instance, which do not grow exponentially with higher orders. Thus, our approach can manage SAT instances with a greater variety of literals.
Here, we show the results on high-order ANF formulas, i.e., AND terms with more than 2 variables:
|Dataset|SR(25)|SR(5)-high|SR(10)-high|
|---|---|---|---|
| NeuroSAT | 57.0% | 53.5% | 51.0% |
| CryptoANFNet | **71.5%** | **71.0%** | **63.5%** |
where SAT instances in **SR(5)-high** and **SR(10)-high** contain clauses with 1st- to 5th-order ANF literals, generated in the same way as the **SR(n)** dataset. The extended CryptoANFNet still outperforms NeuroSAT on the second-order ANF dataset **SR(25)** and also achieves better performance on higher-order ANF formulas.
[cite 1] Ding, Jintai, Jason E. Gower, and Dieter S. Schmidt. Multivariate public key cryptosystems. Vol. 25. Springer Science & Business Media, 2006.
[cite 2] Horáček J, Kreuzer M. On conversions from CNF to ANF. Journal of Symbolic Computation. 2020 Sep 1;100:164-86.
[cite 3] Wolf C. Multivariate quadratic polynomials in public key cryptography. Cryptology ePrint Archive. 2005.
---
---
Rebuttal 3:
Title: Fix the assignment table in Rebuttal A1
Comment: We apologize for posting the wrong table in Rebuttal A1; the corrected table is given below.
If you still have further concerns, or if you are not satisfied with the current responses, please let us know so that we can update our response as soon as possible.
**A1** .... The results, as shown below, illustrate CryptoANFNet's effectiveness in producing assignments from the learned embeddings.
|Dataset|SR(5)|SR(25)|
|---|---|---|
| NeuroSAT | **91.0%** | 5.0% |
| CryptoANFNet | 71.0% | **35.0%** |
---
Rebuttal 4:
Comment: Dear Reviewer ZcPU,
We sincerely appreciate the time you have invested in the review process, as well as the valuable comments and constructive suggestions you have provided. In our rebuttal, we have summarized the main feedback and addressed the key issues, particularly concerning the novelty of our paper and the generalization of our method. However, we have not yet received any further feedback. As the discussion period draws to a close, we are eager to know whether our responses have satisfactorily addressed your concerns.
There may still be some confusion regarding whether our method can generalize to higher-order ANF formulas, whether a proof can be provided for a prediction, and the speed comparison experiments. In light of this, we have carefully responded to the issues you raised and supplemented our experiments. We kindly ask whether you are satisfied with our responses.
We hope that our rebuttal and discussion will clarify any potential misunderstandings and contribute to a more comprehensive evaluation of our submission.
Thank you very much for your time and attention. We sincerely appreciate your support and are more than willing to address any further concerns.
Best regards,
The Authors
---
Rebuttal Comment 4.1:
Comment: Thanks for the rebuttal and it addressed some of my concerns. I am convinced by the high-order ANF part. I also get it that if efficiency is critical in some scenarios, then having a faster SAT solver is important. However, for a fair comparison, I think the author should look at incomplete solvers, which are mostly based on local search, e.g., GSAT, WalkSAT, RoundingSAT, FourierSAT, etc. Those solvers can only give solutions but can not prove UNSAT and they are generally very fast. Again, I believe comparing a predictor with a complete solver is unfair.
Should the author integrate the rebuttal and discuss the incomplete solvers in the final version, I could consider raising my score.
---
Reply to Comment 4.1.1:
Comment: Thank you for your feedback and for acknowledging the points we addressed in our rebuttal, particularly regarding the high-order ANF and the importance of efficiency in certain scenarios.
We agree on the importance of including incomplete solvers in the evaluation. We will incorporate this consideration into the final version of our paper, comparing the predictor with incomplete solvers on our dataset. The new comparison results are shown in the table below. For GSAT, we performed up to 100 iterations.
Table: Comparing the efficiency of different solvers
(Average runtime: (SAT,UNSAT) ms/instance)
| Datasets | SR(5) | SR(25) | Simon 3-8-16 | Simon 3-16-32 | Simon 6-8-16 | Simon 6-16-32 | Speck 3-8-16 | Speck 6-8-16 |
| -------------------- | ------- | ------------- | ------------ | ------------- | ------------ | ------------- | ------------ | ------------ |
| NeuroSAT | (3,3) | (20,20) | (7,7) | (10,10) | (7,7) | (14,14) | (13,13) | (18,18) |
| **CryptoANFNet (our paper)** | (2,2) | (5,5) | (8,8) | (9,9) | (10,10) | (8,8) | (11,11) | (14,14) |
| WDSat | (36,34) | (2470,5662) | (38,38) | (39,39) | (40,37) | (86,150) | (65,72) | (2593,5060) |
| CryptoMiniSat | (4,4) | (13491,35912) | (4,4) | (7,9) | (8,9) | (410,1354) | (101,92) | (1671,3900) |
| Kissat | (2,2) | (4922,14856) | (2,2) | (2,2) | (5,8) | (219,464) | (40,35) | (1484,2919) |
| GSAT[cite 1] | (8,470) | (10951,10799) | (17,1392) | (2570,2425) | (1870,6970) | (14800,15139) | (281,2659) | (9473,12233) |
| WalkSAT[cite 2] | (3,640) | (762,744) | (4,6) | (10,12) | (289,26) | (831,899) | (39,480) | (482,538) |
| RoundingSAT [cite 3] | (3,3) | (36758,50122) | (3,5) |(7,10) |(28,20) | (664,1801) | (23,24) | (29,35) |
| FourierSAT [cite 4] | (1275,8670) | (9620,9687) | (983,426) |(1779,459) | (8163,416) | (8830,8862) | (8733,8689) | (8799,8912) |
[cite 1] https://github.com/Sina-Baharlou/GSAT-WalkSAT
[cite 2] https://gitlab.com/HenryKautz/Walksat
[cite 3] https://gitlab.com/miao_research/roundingsat
[cite 4] https://github.com/vardigroup/FourierSAT
We found that both incomplete solvers, like WalkSAT, and complete solvers, like Kissat, were significantly outperformed in speed by learning-based models such as CryptoANFNet and NeuroSAT when solving SAT instances derived from cryptographic problems. These results indicate that learning-based solvers like CryptoANFNet offer a significant advantage over traditional solvers in terms of application efficiency.
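For context, incomplete solvers of this family are driven by a simple flip loop. A minimal WalkSAT-style sketch on CNF clauses (our own illustration, unrelated to the cited implementations) is:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10_000, seed=0):
    """Minimal WalkSAT: clauses are lists of nonzero ints (DIMACS style).
    Returns a satisfying assignment dict or None. Incomplete by design:
    a None result does NOT prove unsatisfiability."""
    rnd = random.Random(seed)
    assign = {v: rnd.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rnd.choice(unsat)
        if rnd.random() < p:
            var = abs(rnd.choice(clause))       # random-walk move
        else:
            # greedy move: flip the variable leaving the fewest broken clauses
            def breaks(v):
                assign[v] = not assign[v]
                b = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return b
            var = min((abs(l) for l in clause), key=breaks)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = walksat([[1, 2], [-1, 3], [-2, -3]], 3)
assert model is not None
```

Loops like this give fast answers on satisfiable instances but, like a learned predictor, offer no UNSAT proof — which is exactly the comparison point raised by the reviewer.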
We hope that this addition will address your concerns and contribute to a more comprehensive and balanced evaluation of our work. We will include the results of more solvers in the final version.
Thank you again for your valuable suggestions, and we appreciate your willingness to reconsider your rating. | Summary: This paper introduces an approach to handling cryptographic problems by transforming them into Boolean Satisfiability (SAT) problems using a graph structure based on Algebraic Normal Form (ANF) to efficiently manage XOR operations, which are prevalent in cryptography. It proposes CryptoANFNet, a graph learning framework that predicts plaintext-ciphertext satisfiability using a message-passing scheme. CryptoANFNet demonstrates superior scalability, achieving a 50x speedup over heuristic solvers and outperforming the state-of-the-art learning-based SAT solver NeuroSAT in terms of accuracy. Additionally, the paper presents a key-solving algorithm that simplifies ANF-based SAT instances, resulting in improved key decryption accuracy for datasets generated from the Simon and Speck algorithms.
Strengths: * This paper formalizes the ANF formula as the Multivariate Quadratic (MQ) problem, which reduces the complexity of the problem
* The evaluation results demonstrate a significant acceleration of SAT solving; the 50x speedup is surprisingly good.
Weaknesses: * The proposed approach of using graph neural network and message passing is less novel considering its close design compared to NeuroSAT.
* For performance evaluation and baseline selection, this paper doesn't compare itself to state-of-the-art approaches.
* This paper doesn't include popular encryption algorithms, such as AES, in its dataset.
Technical Quality: 3
Clarity: 3
Questions for Authors: * What is the key novelty in the model part when compared to the existing work, NeuroSAT? It seems like the key difference lies in the different message passing design.
* Does this paper select the-state-of-the-art works as its baselines? This paper considers some works that are in the form of competition, e.g., [42], while there are some related works, e.g., "Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach".
* For the dataset with encryption algorithms, this paper generates data with SIMON and SPECK, which are lightweight block ciphers and not really popular in real-world applications. Can this paper extend its scope to more popular and complex ciphers, such as AES?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comment and for recognizing the significance of our work in formalizing the ANF formula as a Multivariate Quadratic (MQ) problem. We appreciate your acknowledgment of how this approach reduces the complexity of the problem. We are pleased to address our contribution in detail.
---
We hope this response clarifies the importance of our work in reducing the complexity of cryptographic problems and significantly improving SAT solving efficiency. We believe this has the potential to open up new avenues for research and practical applications. We are committed to continuing this line of research and look forward to making further progress in this area.
---
#### W1&Q1 "What is the key novelty in the model part when compared to the existing work, NeuroSAT? It seems like the key difference lies in the different message passing design."
**A1** Thank you. CryptoANFNet is inspired by some excellent previous works, like NeuroSAT, but we do not consider our contribution incremental. We detail our key novelty in the following aspects.
1. **Introduction of ANF Formulas**:
We adopted ANF (Algebraic Normal Form) as the input format of SAT instances for efficiently solving SAT instances in cryptographic problems. ANF formulas can represent such instances with fewer literals and clauses, whereas CNF formulas, as used in NeuroSAT, often require a larger number of literals and clauses to represent common cryptographic operations like XOR. Thus, ANF formulas are more efficient for representing cryptographic problems.
2. **ANF-Based Graph Representation**:
We proposed a new ANF-based graph representation to capture the operations in ANF, while NeuroSAT is based on a CNF-based graph representation (LCG). Since the high-order literals are used only as intermediate results in the message-passing process, the ANF-based graph has fewer parameterized nodes than the CNF-based graph.
As shown in Table 1 of our paper, our ANF graph is more efficient than the general graph form used in NeuroSAT. Here, we provide a summary table comparing ANF with two types of CNF-based graph representations, Literal-Clause Graph (LCG) and Variable-Clause Graph (VCG). This demonstrates that our proposed ANF-based graph representation remains efficient.
| | Datasets | SR(5) | SR(25) | Simon 3-8-16 | Simon 3-16-32 | Simon 6-8-16 | Simon 6-16-32 | Speck 3-8-16 | Speck 6-8-16 |
| -------- | --------- | ----- | ------ | ------------ | ------------- | ------------ | ------------- | ------------ | ------------ |
| | #Literals | 6 | 424 | 25 | 49 | 49 | 97 | 57 | 183 |
| CNF(LCG) | #Clauses | 75 | 5492 | 195 | 403 | 735 | 1519 | 360 | 2950 |
| | #Nodes | 87 | 6340 | 245 | 501 | 833 | 1713 | 474 | 3316 |
| | #Literals | 6 | 424 | 25 | 49 | 49 | 97 | 57 | 183 |
| CNF(VCG) | #Clauses | 75 | 5492 | 195 | 403 | 735 | 1519 | 360 | 2950 |
| | #Nodes | 81 | 5916 | 220 | 452 | 784 | 1616 | 417 | 3133 |
| | #Literals | 5 | 25 | 24 | 48 | 48 | 96 | 56 | 128 |
| ANF | #Clauses | 11 | 26 | 24 | 48 | 48 | 96 | 64 | 136 |
| | #Nodes | 27 | 77 | 72 | 144 | 144 | 288 | 184 | 400 |
3. **Model Structure for ANF-Based Graph Representation**:
We proposed a model structure for handling the ANF-based graph representation, which achieves better performance than NeuroSAT on the MQ problem and can be effectively extended to higher-order ANF formulas.
Unlike NeuroSAT, which retains embeddings for all literal nodes that may appear in clauses, ANF formulas present a different challenge: due to high-order literals, the number of literal types increases exponentially with the order, and retaining embeddings for all possible literals would incur unacceptable costs. Therefore, as stated in line 216 of Section 4.3, CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes. High-order nodes are used only as intermediate results in the message-passing process. For high-order ANF formulas, i.e., AND terms with more than 2 variables, we have a similar message-passing process, shown below:
- For the message-passing process from vanilla literal node to clause node, we have:
$L^{(t)}_{l2c} = L\_{l2l}(M^T\_{l2l}L^{(t)})$
$[C^{(t)}\_{m,\text{pos}}, C^{(t)}\_{m,\text{neg}}] = M_{l2c}^T L_{\text{msg}}(L^{(t)}_{l2c})$
- For the message-passing process from clause node to vanilla literal node, we have:
$L^{(t)}_{c2l} = M_{l2c} C_{\text{msg}}([C^{(t)}_{\text{pos}}, C^{(t)}_{\text{neg}}])$
$L^{(t)}_m = M_{l2l} L_{l2m}(L^{(t)}_{c2l})$
where $M_{l2l}$ is a **sparse** adjacency matrix defined by $M_{l2l}(i, j) = 1$ iff the $i$-th vanilla literal appears in the $j$-th high-order literal, and $M_{l2c}$ is the adjacency matrix defined by $M_{l2c}(i, j) = 1$ iff the $i$-th high-order literal appears in the $j$-th clause.
Using the sparse matrix $M_{l2l}$, we only need to retain the edges between vanilla literal nodes and higher-order nodes, without storing the entire adjacency matrix, which grows exponentially with higher orders. This approach effectively handles datasets comprising SAT instances with literals in various orders.
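For concreteness, the two passes can be sketched with sparse incidence matrices as follows. This is a minimal NumPy/SciPy sketch on a tiny hypothetical instance; the plain linear maps stand in for the learned MLPs $L_{l2l}$, $L_{\text{msg}}$, $C_{\text{msg}}$, $L_{l2m}$ and are not the actual CryptoANFNet implementation.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, h, m, d = 4, 3, 2, 8   # vanilla literals, high-order literals, clauses, dim

# Sparse incidence matrices (binary): M_l2l[i, j] = 1 iff vanilla literal i
# appears in high-order literal j; M_l2c[j, k] = 1 iff high-order literal j
# appears in clause k. Only the edges are stored, never a dense table over
# all possible high-order literals.
M_l2l = sparse.csr_matrix((np.ones(5), ([0, 1, 1, 2, 3], [0, 0, 1, 2, 2])), shape=(n, h))
M_l2c = sparse.csr_matrix((np.ones(4), ([0, 1, 1, 2], [0, 0, 1, 1])), shape=(h, m))

# Stand-ins for the learned maps L_l2l, L_msg, C_msg, L_l2m (here: linear).
W_l2l, W_msg = rng.standard_normal((d, d)), rng.standard_normal((d, d))
W_cmsg, W_l2m = rng.standard_normal((2 * d, d)), rng.standard_normal((d, d))

L_t = rng.standard_normal((n, d))        # embeddings of vanilla literal nodes

# Literal -> clause: aggregate vanilla literals into high-order intermediates,
# then into messages for the clause nodes.
L_l2c = (M_l2l.T @ L_t) @ W_l2l          # (h, d): high-order intermediates
C_in = M_l2c.T @ (L_l2c @ W_msg)         # (m, d): messages reaching clauses

# Clause -> literal: push clause state back through the high-order nodes.
C_t = rng.standard_normal((m, 2 * d))    # [C_pos, C_neg] clause embeddings
L_c2l = M_l2c @ (C_t @ W_cmsg)           # (h, d)
L_m = M_l2l @ (L_c2l @ W_l2m)            # (n, d): updates for vanilla literals

print(L_m.shape)  # (4, 8): only vanilla-literal embeddings are retained
```

Note that only the (n, d) and (m, 2d) embedding tables persist across iterations; the (h, d) high-order intermediates are recomputed each pass.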
---
Rebuttal 2:
Comment: Note that the number of parameters in CryptoANFNet is independent of the number of nodes in the ANF-based graph generated from a SAT instance. As for intermediate storage, CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes. During each iteration, it computes intermediate result vectors only for the distinct high-order literals that actually appear in the SAT instance, so memory does not grow exponentially with the order. Thus, our approach can manage SAT instances with a greater variety of literals. Here, we show the results on high-order ANF formulas, i.e., AND terms with more than 2 variables:
|Dataset|SR(25)|SR(5)-high|SR(10)-high|
|---|---|---|---|
|NeuroSAT|57.0%|53.5%|51.0%|
|CryptoANFNet|**71.5%**|**71.0%**|**63.5%**|
where SAT instances in **SR(5)-high** and **SR(10)-high** contain clauses with 1st- to 5th-order ANF literals, generated in the same way as the **SR(n)** dataset. The extended CryptoANFNet still outperforms NeuroSAT on the second-order ANF dataset **SR(25)** and also achieves better performance on higher-order ANF formulas.
4. **Key solving and satisfying assignment prediction**:
As detailed in Section 4.4 of our paper, we propose a key solving algorithm to train CryptoANFNet to sequentially determine the value of each specific bit of the key in a plaintext-ciphertext problem. The results are shown in Table 3 of our paper.
Additionally, CryptoANFNet can also directly extract satisfying assignments of the key from the embeddings of the literals. Here, we present the results of CryptoANFNet on the task of satisfying assignments, which illustrate CryptoANFNet’s effectiveness in producing assignments from the learned embeddings.
| Dataset | SR(5) | SR(25) |
| ------- | ----- | ------ |
| NeuroSAT | **91.0%** | 5.0% |
| CryptoANFNet | 71.0% | **35.0%** |
---
#### W2&Q2 "Does this paper select the-state-of-the-art works as its baselines? ... e.g., [42], while there are some related works, e.g., "Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach"."
**A2** Thank you for your comment. We referred to the benchmark G4satbench [cite 1] to select the state-of-the-art works as the baseline. In G4satbench, NeuroSAT demonstrates superior performance across most datasets compared to other models. Therefore, we chose NeuroSAT as the state-of-the-art reference in our paper. Furthermore, we compared CryptoANFNet with various baselines, such as DG-DAGRNN (Circuit-SAT [cite 2]) and other baselines in G4satbench, and the following table shows that CryptoANFNet exhibits better performance on different datasets.
| Datasets | SR(5) | SR(25) | Simon-3-8-16 | Simon-3-16-32 | Simon-6-8-16 | Simon-6-16-32 | Speck-3-8-16 | Speck-6-8-16 |
| ---------------------- | ----- | ------ | ------------ | ------------- | ------------ | ------------- | ------------ | ------------ |
|GCN(LCG)|91.0%|53.5%|73.2%|74.0%|53.5%|52.0%|53.5%|51.5%
|GCN(VCG)|90.5%|52.%|75.2%|73.5%|52.0%|53.0%|53.0%|52.0%
|GIN(LCG)|91.5%|51.5%|76.2%|74.0%|52.5%|51.5%|53.5%|52.0%
|GIN(VCG)|88.0%|52.5%|75.0%|74.0%|54.5%|52.5%|54.5%|52.5%
|GGNN(LCG)|91.0%|54.0%|76.5%|74.0%|54.0%|53.0%|53.0%|52.5%
|GGNN(VCG)|89.0%|56.0%|76.2%|74.4%|53.5%|52.3%|52.5%|51.5%
|NeuroSAT|91.0%|57.0%|74.0%|72.7%|53.0%|51.0%|55.0%|52.5%
|DG−DAGRNN(Circuit-SAT[cite 2])|84.0%|50.5%|73.0%|52.0%|51.0%|50.5%| 50.5%|51.5%
|CryptoANFNet|**96.0%**|**72.0%**|**76.5%**|**75.6%**|**69.0%**|**66.5%**|**72.0%**|**68.5%**
[cite 1] Li Z, Guo J, Si X. G4satbench: Benchmarking and advancing sat solving with graph neural networks[J]. arXiv preprint arXiv:2309.16941, 2023.
[cite 2] Amizadeh S, Matusevych S, Weimer M. Learning to solve circuit-sat: An unsupervised differentiable approach[C]//International Conference on Learning Representations. 2019.
---
---
Rebuttal 3:
Comment: #### W3&Q3 "For dataset with encryption algorithms, this paper generates data with SIMON and SPECK, which are lightweight block ciphers and not really popular in real-world applications. Can this paper extend its scope to more popular and complex ciphers, such as AES?"
**A3** Thank you for your comment. Theoretically, the methods proposed in this paper can be extended to more complex block ciphers that can be represented by ANF formulas. However, in practice, the efficiency of traditional solvers limits the generation of datasets for these complex block ciphers. Some block cipher structures (such as the use of S-boxes) lack efficient algorithms for conversion to ANF or CNF formulas. This results in a large number of clauses and literals, making it challenging to generate efficient datasets for the network to train on. Below, we use the AES encryption algorithm to illustrate this issue in detail.
1. **Theoretical Conversion to Boolean Formulas**:
For each round of AES encryption, there are four main operations: SubBytes, ShiftRows, MixColumns, and AddRoundKey, which can theoretically be represented as Boolean formulas and thus converted into ANF and CNF forms of SAT instances.
- **SubBytes**: This is a nonlinear operation where each byte is replaced using an S-box. Despite its nonlinearity, we can convert this step into a Boolean formula as follows:
Suppose the input byte is $X = x_7x_6x_5x_4x_3x_2x_1x_0$, and the output byte is $Y = y_7y_6y_5y_4y_3y_2y_1y_0$. For each possible input value (0 to 255), list the corresponding output value and express it as a Boolean formula.
For example, if $X = 00000000$ (i.e., 0x00) and the S-box output is $Y = 01100011$(i.e., 0x63), we can express this as:
$\neg x_7 \land \neg x_6 \land \neg x_5 \land \neg x_4 \land \neg x_3 \land \neg x_2 \land \neg x_1 \land \neg x_0 \rightarrow$ $(\neg y_7 \land y_6 \land y_5 \land \neg y_4 \land \neg y_3 \land \neg y_2 \land y_1 \land y_0)$
By such conversion, this step can be transformed into Boolean formulas.
- **ShiftRows**: This is a row-shifting operation that can be converted to Boolean formulas as described in Section 3 of the paper.
- **MixColumns**: This is a linear transformation involving matrix multiplication over a finite field, which can be described using Boolean formulas.
- **AddRoundKey**: This is an XOR operation, which can be converted to Boolean formulas as described in Section 3 of the paper.
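The **SubBytes** enumeration described above can be sketched in a few lines. This is our own illustrative encoding (not the actual generator used for the experiments): variables 1-8 stand for $x_0..x_7$ and 9-16 for $y_0..y_7$, in DIMACS-style signed integers.

```python
def entry_to_clauses(x_val, y_val, n_bits=8):
    """CNF clauses for one S-box entry, i.e. (X = x_val) -> (Y = y_val).

    The implication (AND of x-literals) -> (AND of y-literals) splits into
    n_bits clauses: for each output bit j, the clause
        (NOT x-pattern) OR (y_j with its correct sign).
    """
    # Negation of the antecedent: flip each x-literal's sign.
    x_lits = [-(i + 1) if (x_val >> i) & 1 else (i + 1) for i in range(n_bits)]
    clauses = []
    for j in range(n_bits):
        y_var = n_bits + j + 1
        y_lit = y_var if (y_val >> j) & 1 else -y_var
        clauses.append(x_lits + [y_lit])
    return clauses

# The real AES S-box maps 0x00 -> 0x63; one entry alone costs 8 clauses,
# so the full 256-entry table needs 256 * 8 = 2048 clauses.
clauses = entry_to_clauses(0x00, 0x63)
print(len(clauses))  # 8
```

This makes the blow-up explicit: every S-box costs thousands of clauses, which is exactly the practical obstacle discussed next.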
2. **Practical Challenges**:
We used the AES encryption algorithm's corresponding SAT instance generator as described in [cite 1] to generate datasets in ANF and CNF forms. For a 10-round 128-bit AES encryption, we generated SAT instances with nearly 8000 literals and 160,000 clauses. The scale of these instances makes it difficult for traditional solvers to verify their SAT or UNSAT status, thereby limiting the creation of a valid dataset. To validate this, we tested the solving speed of SAT instances generated by a 1-round 128-bit AES encryption algorithm on the traditional solver Cryptominisat [cite 2]. It took over 2 hours to solve a single instance. This speed is unacceptable for generating training datasets.
Furthermore, for both NeuroSAT and our proposed CryptoANFNet, the graph representations of such large-scale SAT instances exceed the capacity of current machines. Therefore, in practice, we cannot test our methods on datasets generated from AES encryption.
[cite 1] https://github.com/meelgroup/aes-cnf-gen
[cite 2] https://github.com/msoos/cryptominisat
In conclusion, while our methods have theoretical applicability to complex block ciphers, practical limitations in traditional solvers' efficiency and dataset generation hinder their current implementation on such large-scale encryption algorithms.
---
---
Rebuttal 4:
Comment: Dear Reviewer 6xjG,
We sincerely appreciate the time you have invested in the review process, as well as the valuable comments and constructive suggestions you have provided. In our rebuttal, we have summarized the main feedback and addressed the key issues, particularly concerning the novelty of our paper and the generalization of our method. However, we have not yet received any further feedback. As the discussion period is nearing its end, we are eager to know whether our responses have satisfactorily addressed your concerns.
There still seems to be some confusion regarding the key novelty of CryptoANFNet in the model part, how we selected the baseline, and whether our method can be generalized to other encryption algorithms. In light of this, we have carefully addressed the issues you raised and supplemented our experiments. We kindly ask if you are satisfied with our responses.
We hope that our rebuttal and discussion will clarify any potential misunderstandings and contribute to a more comprehensive evaluation of our submission.
Thank you very much for your time and attention. We sincerely appreciate your support and are more than willing to address any further concerns.
Best regards,
The Authors | Summary: The paper strikes an interesting endeavor by introducing machine learning to address plaintext-ciphertext cryptographic problems, which is, to my best knowledge, new in the literature. It lies between AI and security, specifically crypto, and goes beyond the existing lines of research in learning for combinatorics. I think this paper pushes the frontier of AI for discrete math. Overall I think the paper is interesting and worth publishing to bridge the two communities: machine learning and crypto.
Strengths: 1) the paper is well written which clearly introduces the background, the problem setting and the preliminaries. It is quite informative and could inspire the readers.
2) specifically, the work introduces machine learning for plaintext-ciphertext satisfiability prediction. On one hand it outperforms traditional heuristic methods in speed, and on the other hand it outperforms the peer learning-based NeuroSAT in accuracy.
3) The authors further show how to extend their approach to the key decryption problem with a notable performance boost, compared with the variant without the devised simplification approach.
4) the experiments are themselves novel and well designed. The results are comprehensive and informative, which I think would inspire the future work in this emerging area.
Weaknesses: 1) the technical part is a bit simple from the top ML conference perspective, while of course this is often the case for machine learning applications to emerging areas
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) how the presented techniques could impact the area of machine learning for crypto, in a more broad sense?
2) Is there any impact or concerns to the world when the presented approach attains more success? What is the fundamental challenge for this line of research?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and for recognizing the innovative aspects and experimental design of our work. Your questions about the conclusions and insights offered by this paper are important and meaningful. We greatly appreciate the opportunity to elaborate on our contributions and address the points you've raised.
---
We sincerely appreciate your insightful comments on our work. We fully agree that this research has the potential to foster collaboration and knowledge exchange between the machine learning and cryptography communities. Thank you again for your valuable comments and for recognizing the broad impact of our contributions. We are committed to continuing this line of research and look forward to seeing how it impacts future developments in AI-assisted cryptography and beyond.
---
#### W1 the technical part is a bit simple from the top ML conference perspective, while of course this is often the case for machine learning applications to emerging areas
**A1** Thank you for your comment. We address the technically non-trivial aspects of our work as follows:
1. **Introduction of ANF Formulas**:
We introduced the input format of SAT instances, ANF (Algebraic Normal Form), for efficiently solving SAT instances in cryptographic problems. ANF formulas can represent SAT instances in cryptographic problems with fewer literals and clauses, whereas CNF formulas, as used in NeuroSAT, often require a larger number of literals and clauses to represent common cryptographic operations like XOR. In this way, ANF formulas are more efficient for representing cryptographic problems.
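To make this size gap concrete: the parity constraint $x_1 \oplus \cdots \oplus x_n = 0$ is a single ANF clause, but a direct CNF encoding needs $2^{n-1}$ clauses, one blocking each odd-parity assignment. A minimal sketch (DIMACS-style integer literals; the helper name is ours):

```python
from itertools import product

def xor_zero_to_cnf(variables):
    """CNF clauses encoding variables[0] XOR ... XOR variables[-1] = 0.

    For each odd-parity (i.e. forbidden) assignment, emit the one clause
    that is false exactly there. Literals are signed ints: +v for x_v,
    -v for NOT x_v.
    """
    clauses = []
    for bits in product([0, 1], repeat=len(variables)):
        if sum(bits) % 2 == 1:  # odd parity violates the XOR-equals-0 constraint
            clauses.append([-v if b else v for v, b in zip(variables, bits)])
    return clauses

cnf = xor_zero_to_cnf([1, 2, 3])
print(len(cnf))  # 2^(3-1) = 4 clauses for a constraint that is 1 ANF clause
```

Since XOR is ubiquitous in ciphers such as Simon and Speck, this per-constraint blow-up compounds across the whole instance, which is what the #Clauses rows of Table 1 reflect.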
2. **ANF-Based Graph Representation**:
We proposed a new ANF-based graph representation to capture the operations in ANF, while NeuroSAT is based on a CNF-based graph representation (LCG). Since high-order literals are used only as intermediate results in the message-passing process, the ANF-based graph contains fewer parameterized nodes than the CNF-based graph.
As shown in Table 1 of our paper, our ANF graph is more efficient than the general graph form used in NeuroSAT. Here, we provide a summary table comparing ANF with two types of CNF-based graph representations, Literal-Clause Graph (LCG) and Variable-Clause Graph (VCG). This demonstrates that our proposed ANF-based graph representation remains efficient.
| | Datasets | SR(5) | SR(25) | Simon 3-8-16 | Simon 3-16-32 | Simon 6-8-16 | Simon 6-16-32 | Speck 3-8-16 | Speck 6-8-16 |
| -------- | --------- | ----- | ------ | ------------ | ------------- | ------------ | ------------- | ------------ | ------------ |
| | #Literals | 6 | 424 | 25 | 49 | 49 | 97 | 57 | 183 |
| CNF(LCG) | #Clauses | 75 | 5492 | 195 | 403 | 735 | 1519 | 360 | 2950 |
| | #Nodes | 87 | 6340 | 245 | 501 | 833 | 1713 | 474 | 3316 |
| | #Literals | 6 | 424 | 25 | 49 | 49 | 97 | 57 | 183 |
| CNF(VCG) | #Clauses | 75 | 5492 | 195 | 403 | 735 | 1519 | 360 | 2950 |
| | #Nodes | 81 | 5916 | 220 | 452 | 784 | 1616 | 417 | 3133 |
| | #Literals | 5 | 25 | 24 | 48 | 48 | 96 | 56 | 128 |
| ANF | #Clauses | 11 | 26 | 24 | 48 | 48 | 96 | 64 | 136 |
| | #Nodes | 27 | 77 | 72 | 144 | 144 | 288 | 184 | 400 |
3. **Model Structure for ANF-Based Graph Representation**:
We proposed a model structure for handling the ANF-based graph representation, which achieves better performance than NeuroSAT on the MQ problem and extends effectively to higher-order ANF formulas.
Due to the high-order literals in ANF formulas, the number of literal types increases exponentially with the order. Therefore, as stated in line 216 of Section 4.3, CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes; high-order nodes serve only as intermediate results in the message-passing process. For high-order ANF formulas, i.e., AND terms with more than 2 variables, we use a similar message-passing process, shown below:
- For the message-passing process from vanilla literal node to clause node, we have:
$L^{(t)}_{l2c} = L_{l2l}(M^T_{l2l} L^{(t)})$
$[C^{(t)}_{m,\text{pos}}, C^{(t)}_{m,\text{neg}}] = M^T_{l2c} L_{\text{msg}}(L^{(t)}_{l2c})$
- For the message-passing process from clause node to vanilla literal node, we have:
$L^{(t)}_{c2l} = M_{l2c} C_{\text{msg}}([C^{(t)}_{\text{pos}}, C^{(t)}_{\text{neg}}])$
$L^{(t)}_m = M_{l2l} L_{l2m}(L^{(t)}_{c2l})$
where $M_{l2l}$ is a **sparse** adjacency matrix defined by $M_{l2l}(i, j) = 1$ iff the $i$-th vanilla literal appears in the $j$-th high-order literal, and $M_{l2c}$ is the adjacency matrix defined by $M_{l2c}(i, j) = 1$ iff the $i$-th high-order literal appears in the $j$-th clause.
Using the sparse matrix $M_{l2l}$, we only need to retain the edges between vanilla literal nodes and higher-order nodes, without storing the entire adjacency matrix, which grows exponentially with higher orders. This approach effectively handles datasets comprising SAT instances with literals in various orders.
---
Rebuttal 2:
Comment: Here, we show the results on high-order ANF formulas, i.e., AND terms with more than 2 variables:
|Dataset|SR(25)|SR(5)-high|SR(10)-high|
|---|---|---|---|
|NeuroSAT|57.0%|53.5%|51.0%|
|CryptoANFNet|**71.5%**|**71.0%**|**63.5%**|
where SAT instances in **SR(5)-high** and **SR(10)-high** contain clauses with 1st- to 5th-order ANF literals, generated in the same way as the **SR(n)** dataset. The extended CryptoANFNet still outperforms NeuroSAT on the second-order ANF dataset **SR(25)** and also achieves better performance on higher-order ANF formulas.
4. **Key solving and satisfying assignment prediction**:
As detailed in Section 4.4 of our paper, we propose a key solving algorithm to train CryptoANFNet to sequentially determine the value of each specific bit of the key in a plaintext-ciphertext problem. The results are shown in Table 3 of our paper.
Additionally, CryptoANFNet can also directly extract satisfying assignments of the key from the embeddings of the literals. Here, we present the results of CryptoANFNet on the task of satisfying assignments, which illustrate CryptoANFNet’s effectiveness in producing assignments from the learned embeddings.
| Dataset | SR(5) | SR(25) |
| ------- | ----- | ------ |
| NeuroSAT | **91.0%** | 5.0% |
| CryptoANFNet | 71.0% | **35.0%** |
---
---
Rebuttal 3:
Comment: #### Q1 "how the presented techniques could impact the area of machine learning for crypto, in a more broad sense?"
**A2** Thank you for your comment. We are excited to discuss the broader impact and future directions of our work in the area of machine learning for crypto.
1. **Advancing ML-assisted Cryptographic Algorithm Design**: Understanding how CryptoANFNet solves cryptographic instances can inspire the development of new, more robust encryption algorithms. By analyzing the methods and efficiencies of CryptoANFNet, researchers can design encryption algorithms that are more resistant to such advanced SAT-solving techniques, leading to more targeted and secure cryptographic solutions.
2. **Development of ML-assisted SAT Solvers for Cryptographic Analysis**: Our work proposes the use of machine learning-assisted SAT solvers, which paves the way for the development of specialized hardware accelerators designed for ML-assisted cryptographic analysis. These accelerators could significantly enhance the speed and efficiency of cryptographic problem-solving, making advanced cryptographic analysis more accessible and practical.
3. **Promoting ML-based Automated Cryptographic Analysis Tools**: The integration of machine learning methods into cryptographic analysis can lead to the development of more automated and intelligent cryptographic tools. These tools could reduce the need for manual analysis, accelerating cryptographic research and allowing researchers to focus on more complex and innovative problems. By leveraging ML techniques, we can create smarter tools that enhance the overall efficiency and effectiveness of cryptographic analysis.
---
---
Rebuttal 4:
Comment: #### Q2 "Is there any impact or concerns to the world when the presented approach attains more success? What is the fundamental challenge for this line of research?"
**A3** Thank you for your comment. We appreciate the opportunity to discuss the impact and concerns related to our work on the broader field of cryptographic analysis and algorithm design, as follows:
1. **Accelerating Cryptographic Analysis**: By providing faster and more efficient solutions to SAT instances derived from cryptographic problems, our work can significantly expedite cryptographic analysis. This acceleration could potentially render previously computationally infeasible attacks feasible, thereby advancing the field of cryptography.
2. **Advancing Cryptographic Algorithm Design and Improvement**: Machine learning-based SAT solving in cryptographic analysis can be beneficial in multiple areas beyond direct ANF-to-SAT problem solving. For instance, it could aid in breaking post-quantum cryptographic problems, driving the evolution and enhancement of cryptographic algorithm design.
However, this research faces two primary challenges:
1. **Exponential Growth in Computational Complexity with Increased Rounds**: As the number of encryption rounds increases, the computational effort required for decryption typically grows exponentially. Moreover, increased rounds enhance the diffusion property of cryptographic algorithms, where small changes in the input result in significant variations in the output. In our cryptographic instance generation, more rounds mean each bit of the key will be related to more intermediate output bits, increasing the constraints for each key bit and thereby complicating the key-solving process.
2. **Difficulty in Representing Certain Encryption Algorithms as SAT Instances**: Some encryption algorithms, particularly those using S-boxes (e.g., AES, DES) or variable key lengths (e.g., Blowfish), are challenging to efficiently represent as SAT instances. For instance, enumerating the byte mappings within an S-box requires a substantial number of clauses and literals, greatly increasing the solving difficulty.
Overall, this line of research not only aims to push the boundaries of cryptographic analysis through improved SAT-solving techniques but also highlights the potential and challenges of integrating machine learning methods in cryptographic research.
---
---
Rebuttal 5:
Comment: Dear Reviewer uZGu,
We sincerely appreciate the time you have invested in the review process, as well as the valuable comments and constructive suggestions you have provided. In our rebuttal, we have summarized the main feedback and addressed the key issues, particularly regarding the technical novelty of our paper and its impact on the field. However, we have not yet received any further feedback. As the discussion period is nearing its end, we are eager to know whether our responses have satisfactorily addressed your concerns.
We appreciate that you acknowledged the strengths of our approach, analysis, and experiments. Additionally, we have elaborated on the technical novelties of our work and supplemented it with relevant experiments. Furthermore, we have discussed the impact of our method on the field and its future development. We kindly ask if you are satisfied with our responses.
We hope that our rebuttal and discussion will clarify any potential misunderstandings and contribute to a more comprehensive evaluation of our submission.
Thank you very much for your time and attention. We sincerely appreciate your support and are more than willing to address any further concerns.
Best regards,
The Authors | Summary: In this manuscript, the authors propose an ANF-based graph structure that efficiently handles XOR and AND operations in cryptographic problems. Additionally, they introduce CryptoANFNet, a message-passing neural network model designed to predict the satisfiability of cryptographic problems. The authors also present a key-solving algorithm that enhances key decryption accuracy.
Strengths: 1. This work introduces efficient representations and learning strategies for encryption problems, which are highly innovative.
2. In this work, CryptoANFNet excels in predicting satisfiability and key decryption, being up to 50 times faster and more accurate than existing methods.
Weaknesses: 1. In the “Introduction” of the article, the author mentioned that the application of ANF to existing SAT solvers is not as straightforward as CNF, but the article does not explicitly propose a solution that corresponds to it.
2. Table 1 listed the parameters of SAT problems in CNF and ANF, but the article did not provide the algorithmic level explanations of these results.
3. The author proposed CryptoANFNet and compared it with NeuroSAT. The results of the tests on different datasets were also listed in Table 2. However, the article lacks further comparison of the two algorithms.
4. This article lacks a detailed description of the specific steps of the key-solving algorithm.
Technical Quality: 3
Clarity: 3
Questions for Authors: No
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Discussed the limitations in Appendix B
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for recognizing the innovative aspects of our work. We appreciate your positive assessment of our paper's contribution. Your feedback highlights the key strengths of our research, particularly the introduction of efficient representations and learning strategies for encryption problems. We are grateful for the opportunity to elaborate on our contributions and address any questions you have.
---
We sincerely appreciate your time and effort in reviewing our manuscript. We are committed to furthering this line of inquiry and contributing to the development of more efficient and effective cryptographic tools. We would sincerely appreciate it if you could reconsider your rating and we are more than happy to address any further concerns you may have.
---
#### Q1 "In the “Introduction” of the article, the author mentioned that the application of ANF to existing SAT solvers is not as straightforward as CNF, but the article does not explicitly propose a solution that corresponds to it."
**A1** Thank you for your comment. In line 54, we mention that “However, representing ANF as a graph for efficient learning is challenging, and applying it to existing SAT solvers is not as straightforward as CNF” for the following reasons:
1. **Input Format Compatibility**: Existing traditional or ML-based SAT solvers predominantly accept CNF (Conjunctive Normal Form) as their input format. When dealing with cryptographic problems formulated in ANF (Algebraic Normal Form), it is necessary to convert ANF to CNF before inputting it into these solvers. This conversion process adds an extra step, making the application of ANF less straightforward compared to CNF.
2. **Lack of ANF-based Graph Structures**: Unlike CNF, which has established graph representations such as Literal-Clause Graph (LCG) and Variable-Clause Graph (VCG) used in learning-based SAT solvers, ANF lacks similar graph structures. These graph representations are crucial for learning-based solvers as they facilitate efficient learning and problem-solving. The absence of well-defined ANF-based graph structures poses a significant challenge for applying learning-based approaches to ANF-formulated problems. This gap highlights the need for developing novel ANF-based graph structures to enable efficient learning and application in SAT solvers.
To address this challenge and enhance SAT practice within ANF, we propose a novel approach using an ANF-based graph structure to represent SAT instances relevant to cryptographic applications. This approach led to the development of CryptoANFNet, an ML-based SAT solver designed to directly predict the satisfiability of SAT instances in the ANF input format. By leveraging this method, we can effectively handle ANF-based SAT problems without the need for conversion to CNF, thereby streamlining the process and improving efficiency.
---
#### Q2 "Table 1 listed the parameters of SAT problems in CNF and ANF, but the article did not provide the algorithmic level explanations of these results."
**A2** Thank you for you question. We will provide a further explanation of Table 1 in two steps:
1. **Literal and Clause Counts in ANF and CNF:**
- As stated in Section 3, Algebraic Normal Form (ANF) formulas consist of Boolean equations, representing the conjunction of logical formulas in SAT instances. Each variable is referred to as a (vanilla) literal. In a Boolean equation, the right side is always 0, and the Boolean function on the left side, formed by the XOR connection of monomials, is called a clause. Each monomial is either the constant term 1, a (vanilla) literal, or a product of variables (literals).
- As stated in line 45, Conjunctive Normal Form (CNF) is a conjunction (and-ing) of clauses, with each clause consisting of a disjunction (or-ing) of the true and negated forms of literals.
- For the same cryptographic problem's SAT instance, let $n_a$ and $m_a$ denote the number of literals and clauses in ANF format, and $n_c$ and $m_c$ those in CNF format. Due to its different algebraic structure, CNF requires more literals and clauses to represent the XOR operations in cryptographic problems, so $n_c > n_a$ and $m_c > m_a$. The #Literals and #Clauses rows in Table 1 report these counts for the ANF and CNF formats, respectively.
---
Rebuttal 2:
Comment: 2. **Parameterized Node Count for Graph Representations:**
- In the ANF-based graph, there are first-order literal nodes, positive/negative clause nodes, and unparameterized second-order nodes like $x_1x_2$. Literal nodes do not distinguish between positive and negative. Each clause with the same sets of literals in an ANF formula is represented by two nodes (positive and negative), indicating the constant term taking 0 or 1.
- In the CNF-based graph, like Literal-Clause Graph (LCG) representation used in NeuroSAT, there are clause nodes and positive/negative literal nodes, with each literal represented by two nodes indicating its true and negated forms $x$ and $\bar{x}$.
- As mentioned in line 216 of Section 4.3, CryptoANFNet retains embeddings only for first-order nodes (vanilla literals) and clause nodes. High-order nodes are used only as intermediate results in the message-passing process. In comparison, NeuroSAT retains embeddings for all nodes. Consequently, the number of parameterized nodes requiring embeddings in CNF-based LCG representation is #Node = 2×#literal+#clause, whereas in the ANF graph representation, it is #Node = #literal+2×#clause.
These details clarify the differences between the ANF-based and CNF-based (LCG) representations and their implications for the number of literals and clauses in each format and the number of nodes requiring embeddings, as shown in Table 1.
We will further elaborate by providing a summary table below, which compares ANF with two types of CNF-based graph representations, Literal-Clause Graph (LCG) and Variable-Clause Graph (VCG). In VCG, literal nodes are not distinguished as positive or negative. The number of parameterized nodes in CNF-based VCG representation is #Node = #literal+#clause.
| | Datasets | SR(5) | SR(25) | Simon 3-8-16 | Simon 3-16-32 | Simon 6-8-16 | Simon 6-16-32 | Speck 3-8-16 | Speck 6-8-16 |
| ---| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | #Literals | 6 | 424 | 25 | 49 | 49 | 97 | 57 | 183 |
| CNF(LCG) | #Clauses | 75 | 5492 | 195 | 403 | 735 | 1519 | 360 | 2950 |
| | #Nodes | 87 | 6340 | 245 | 501 | 833 | 1713 | 474 | 3316 |
| | #Literals | 6 | 424 | 25 | 49 | 49 | 97 | 57 | 183 |
| CNF(VCG) | #Clauses | 75 | 5492 | 195 | 403 | 735 | 1519 | 360 | 2950 |
| | #Nodes | 81 | 5916 | 220 | 452 | 784 | 1616 | 417 | 3133 |
| | #Literals | 5 | 25 | 24 | 48 | 48 | 96 | 56 | 128 |
| ANF | #Clauses | 11 | 26 | 24 | 48 | 48 | 96 | 64 | 136 |
| | #Nodes | 27 | 77 | 72 | 144 | 144 | 288 | 184 | 400 |
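As a sanity check, the #Nodes rows follow from the #Literals/#Clauses rows via the formulas #Node = 2×#literal+#clause (LCG), #literal+#clause (VCG), and #literal+2×#clause (ANF). Illustrative Python, using the SR(25) column:

```python
# Node counts per graph representation, from literal/clause counts.
def nodes_lcg(lits, clauses):
    return 2 * lits + clauses   # pos/neg literal nodes + clause nodes

def nodes_vcg(lits, clauses):
    return lits + clauses       # variable nodes + clause nodes

def nodes_anf(lits, clauses):
    return lits + 2 * clauses   # vanilla literals + pos/neg clause nodes

# SR(25): CNF has 424 literals / 5492 clauses; ANF has 25 / 26.
print(nodes_lcg(424, 5492), nodes_vcg(424, 5492), nodes_anf(25, 26))
# -> 6340 5916 77, matching the #Nodes rows of the table
```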
---
#### Q3 "The author proposed CryptoANFNet and compared it with NeuroSAT. The results of the tests on different datasets were also listed in Table 2. However, the article lacks further comparison of the two algorithms."
**A3** Thank you. We will give a detailed comparison between NeuroSAT and CryptoANFNet from the following aspects.
1. **Input Formats for SAT Instances**:
CryptoANFNet introduces ANF as the input format for SAT instances, while NeuroSAT uses CNF. ANF formulas can represent SAT instances in cryptographic problems with fewer literals and clauses, whereas CNF formulas, as used in NeuroSAT, often require a larger number of literals and clauses to represent common cryptographic operations like XOR. Thus, ANF formulas are more efficient for representing cryptographic problems.
2. **Graph Representations for Inputs**:
CryptoANFNet utilizes an ANF-based graph representation, as proposed in our paper, while NeuroSAT relies on a CNF-based graph representation (LCG). Since CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes, as shown in Table 1 of our paper, the ANF-based graph representation in CryptoANFNet is more efficient than the general graph form for SAT in NeuroSAT. To elaborate further, we provide a summary table in A2 that compares ANF with two types of CNF-based graph representations, Literal-Clause Graph (LCG) and Variable-Clause Graph (VCG). This comparison demonstrates that our proposed ANF-based graph representation remains efficient.
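To make the XOR blow-up concrete, here is a small illustrative sketch (the helper below is ours, not code from the paper): an $n$-variable XOR constraint is a single ANF clause, while a direct CNF encoding needs one blocking clause per even-parity assignment, i.e., $2^{n-1}$ clauses.

```python
from itertools import product

def xor_cnf_clauses(variables):
    """Direct CNF encoding of XOR(variables) = 1: block every
    even-parity (falsifying) assignment -> 2^(n-1) clauses."""
    clauses = []
    for assignment in product([False, True], repeat=len(variables)):
        if sum(assignment) % 2 == 0:  # even parity falsifies the XOR
            # this clause is violated by exactly this assignment
            clauses.append([-v if a else v
                            for v, a in zip(variables, assignment)])
    return clauses

# XOR over 4 variables: one ANF clause, but 8 CNF clauses of width 4
print(len(xor_cnf_clauses([1, 2, 3, 4])))  # 8
```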
---
Rebuttal 3:
Comment: 3. **Performance**:
As shown in Table 2 of our paper, CryptoANFNet outperforms NeuroSAT across different datasets. Additionally, we referred to the benchmark G4satbench [cite 1] to compare CryptoANFNet with various baselines, and the following table shows that CryptoANFNet exhibits better performance on different datasets.
| Datasets | SR(5) | SR(25) | Simon-3-8-16 | Simon-3-16-32 | Simon-6-8-16 | Simon-6-16-32 | Speck-3-8-16 | Speck-6-8-16 |
| ---------------------- | ----- | ------ | ------------ | ------------- | ------------ | ------------- | ------------ | ------------ |
|GCN(LCG)|91.0%|53.5%|73.2%|74.0%|53.5%|52.0%|53.5%|51.5%|
|GCN(VCG)|90.5%|52.%|75.2%|73.5%|52.0%|53.0%|53.0%|52.0%|
|GIN(LCG)|91.5%|51.5%|76.2%|74.0%|52.5%|51.5%|53.5%|52.0%|
|GIN(VCG)|88.0%|52.5%|75.0%|74.0%|54.5%|52.5%|54.5%|52.5%|
|GGNN(LCG)|91.0%|54.0%|76.5%|74.0%|54.0%|53.0%|53.0%|52.5%|
|GGNN(VCG)|89.0%|56.0%|76.2%|74.4%|53.5%|52.3%|52.5%|51.5%|
|NeuroSAT|91.0%|57.0%|74.0%|72.7%|53.0%|51.0%|55.0%|52.5%|
|DG−DAGRNN (Circuit-SAT [cite 2])|84.0%|50.5%|73.0%|52.0%|51.0%|50.5%|50.5%|51.5%|
|CryptoANFNet|**96.0%**|**72.0%**|**76.5%**|**75.6%**|**69.0%**|**66.5%**|**72.0%**|**68.5%**|
[cite 1] Li Z, Guo J, Si X. G4satbench: Benchmarking and advancing sat solving with graph neural networks[J]. arXiv preprint arXiv:2309.16941, 2023
[cite 2] Amizadeh S, Matusevych S, Weimer M. Learning to solve circuit-sat: An unsupervised differentiable approach[C]//International Conference on Learning Representations. 2019.
4. **Model Structures**:
As described in Section 4.3 of our paper, ANF clauses involve both AND and OR operations. When constructing the ANF graph for CryptoANFNet, there are first-order nodes, positive and negative clause nodes, and high-order nodes like $x_1x_2$. Messages need to be passed between clause nodes and high-order nodes, as well as between high-order nodes and first-order nodes (vanilla literals). In contrast, when constructing the CNF graph for NeuroSAT, the LCG representation involves clause nodes and positive and negative literal nodes, with information passing only between clause and literal nodes. To perform message passing efficiently, CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes. As shown in the table in **A2**, it requires fewer embedding parameters, making the model more efficient.
5. **Adaptability to SAT instances in high-order ANF Formulas**:
Since high-order nodes are used only as intermediate results in the message-passing process, CryptoANFNet can be adapted to high-order ANF formulas. For high-order ANF formulas, i.e., AND terms with more than 2 variables, we use a similar message-passing process, shown below:
- For the message-passing process from vanilla literal node to clause node, we have:
$L^{(t)}_{l2c} = L\_{l2l}(M^T\_{l2l}L^{(t)})$
$[C^{(t)}\_{m,\text{pos}}, C^{(t)}\_{m,\text{neg}}] = M_{l2c}^T L_{\text{msg}}(L^{(t)}_{l2c})$
- For the message-passing process from clause node to vanilla literal node, we have:
$L^{(t)}\_{c2l} = M\_{l2c} C\_{\text{msg}}([C^{(t)}\_{\text{pos}}, C^{(t)}\_{\text{neg}}])$
$L^{(t)}\_m = M\_{l2l} L\_{l2m}(L^{(t)}\_{c2l})$
where $M_{l2l}$ is a **sparse** adjacency matrix with $M_{l2l}(i, j) = 1$ iff the $i$-th vanilla literal appears in the $j$-th high-order literal, and $M_{l2c}$ is the adjacency matrix with $M_{l2c}(i, j) = 1$ iff the $i$-th high-order literal appears in the $j$-th clause.
Using the sparse matrix $M_{l2l}$, we only need to retain the edges between vanilla literal nodes and higher-order nodes, without storing the entire adjacency matrix, which grows exponentially with higher orders. This approach effectively handles datasets comprising SAT instances with literals in various orders.
Note that the number of parameters in CryptoANFNet is independent of the number of nodes in the ANF-based graph generated from a SAT instance. Considering the intermediate results stored by CryptoANFNet, it only retains embeddings for first-order nodes (vanilla literals) and clause nodes. During each iteration, CryptoANFNet simultaneously computes at most as many intermediate result vectors as there are distinct literals in the SAT instance, a count that does not grow exponentially with the order. Thus, our approach can manage SAT instances with a greater variety of literals.
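A toy numpy sketch of this two-hop message passing (sizes are ours, a single linear map stands in for the learned MLPs, and the incidence matrices are dense here for clarity; in the real model $M_{l2l}$ would be stored sparsely): high-order node states appear only as intermediates and are never stored.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension

# Incidence matrices:
# M_l2l: vanilla literals x high-order literals
# M_l2c: high-order literals x clauses
M_l2l = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 1, 1],
                  [0, 0, 1]], dtype=float)
M_l2c = np.array([[1, 0],
                  [1, 1],
                  [0, 1]], dtype=float)

L = rng.standard_normal((4, d))  # embeddings kept for vanilla literals only
W = rng.standard_normal((d, d))  # linear stand-in for the learned MLPs

# literal -> clause: aggregate through high-order nodes (intermediate only)
H = M_l2l.T @ (L @ W)       # high-order intermediate results, never stored
C = M_l2c.T @ H             # messages arriving at clause nodes
# clause -> literal: the reverse pass
L_new = M_l2l @ (M_l2c @ (C @ W))
print(L_new.shape)  # (4, 8)
```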
---
Rebuttal 4:
Comment: Here, we show the results on high-order ANF formulas, i.e., AND terms with more than 2 variables:
|Dataset|SR(25)|SR(5)-high|SR(10)-high|
|---|---|---|---|
|NeuroSAT|57.0%|53.5%|51.0%|
|CryptoANFNet|**71.5%**|**71.0%**|**63.5%**|
where SAT instances in **SR(5)-high** and **SR(10)-high** contain clauses with 1-5 order ANF literals, generated in the same way as the **SR(n)** dataset. It can be seen that the extended CryptoANFNet still outperforms NeuroSAT on the second-order ANF formula dataset **SR(25)** and achieves better performance on higher-order ANF formulas.
---
#### Q4 "This article lacks a detailed description of the specific steps of the key-solving algorithm."
**A4** Thank you. Let us consider a small instance under the Simon encryption algorithm with a 4-bit key $K=\overline{k_3k_2k_1k_0}$ and plaintext-ciphertext pair $X$ and $Y$. We will follow the three-step process described in line 266 of Section 4.4 to determine the value of $k_0$ in the key $K$.
1. **Initial ANF Formulation:**
Based on the encryption process and the given plaintext-ciphertext pair, we derive the initial ANF formula $x$.
2. **Guessing the specific bit $k_0$:**
We hypothesize $k_0 = 1$ and $k_0 = 0$ separately. By substituting 1/0 into the initial ANF formula $x$ respectively, we reduce the number of variables by one, resulting in two derived SAT instances $x_{k_0=1}$ and $x_{k_0=0}$.
3. **Solving with CryptoANFNet:**
Finally, we input $x_{k_0=1}$ and $x_{k_0=0}$ into CryptoANFNet, which outputs two satisfiability scores $s_{k_0=1}$ and $s_{k_0=0}$. If $s_{k_0=1} > s_{k_0=0}$, we determine that $k_0 = 1$; otherwise, $k_0 = 0$.
By following this key-solving algorithm, CryptoANFNet efficiently assists in determining the value of each individual key bit in cryptographic problems.
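A minimal sketch of this bit-by-bit loop (all helpers below are toy stand-ins of ours: the real `score` is CryptoANFNet's satisfiability output, and `substitute` would fix one key variable in the ANF instance):

```python
HIDDEN_KEY = [1, 0, 1, 1]  # toy ground-truth key for this demo only

def substitute(formula, bit, value):
    # hypothetical helper: fixing one key variable is modeled as recording it
    return {**formula, bit: value}

def score(formula):
    # stand-in for CryptoANFNet: higher when guesses agree with the hidden key
    return sum(1 for b, v in formula.items() if HIDDEN_KEY[b] == v)

def recover_key(initial_formula, n_bits):
    formula, key_bits = initial_formula, []
    for i in range(n_bits):
        f1 = substitute(formula, bit=i, value=1)  # guess k_i = 1
        f0 = substitute(formula, bit=i, value=0)  # guess k_i = 0
        bit = 1 if score(f1) > score(f0) else 0   # keep the higher score
        key_bits.append(bit)
        formula = f1 if bit else f0               # carry the reduced instance
    return key_bits

print(recover_key({}, 4))  # [1, 0, 1, 1]
```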
---
Rebuttal 5:
Comment: Dear Reviewer imRj,
We sincerely appreciate the time you have invested in the review process, as well as the valuable comments and constructive suggestions you have provided. In our rebuttal, we have summarized the main feedback and addressed the key issues, particularly concerning the novelty of our paper and the generalization of our method. However, we have not yet received any further feedback. As the discussion period is nearing its end, we are eager to know whether our responses have satisfactorily addressed your concerns.
There still seems to be some confusion regarding the introduction of ANF, the comparison of parameters between SAT problems in CNF and ANF, further details on the Key-solving algorithm, and the comparison between CryptoANFNet and NeuroSAT. In light of this, we have carefully addressed the issues you raised and supplemented our experiments. We kindly ask if you are satisfied with our responses.
We hope that our rebuttal and discussion will clarify any potential misunderstandings and contribute to a more comprehensive evaluation of our submission.
Thank you very much for your time and attention. We sincerely appreciate your support and are more than willing to address any further concerns.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' time, valuable feedback, and constructive suggestions. Overall, the reviewers have deemed our work "highly innovative" (imRj) and "well-written" (uZGu, ZcPU), with the appropriate level of technical detail (ZcPU). They have acknowledged our methodology and experiment design as "novel and well-designed" (imRj, uZGu), with "excels" and "outperforms" being common sentiments. Furthermore, the results of our work have been described as "surprisingly good", "comprehensive", and "informative" (6xjG, uZGu). The main concerns are whether our proposed method is limited to 2nd-order ANF formulations, whether it can be generalised to other popular ciphers, whether it can deliver a proof for its prediction, and the difference between the proposed CryptoANFNet and previous work such as NeuroSAT. To preempt these potential misunderstandings that might impact the evaluation of our work, we first restate our contributions compared to previous works and then address the concerns.
1. **Contributions**
- **Efficient Input Formats and Graph Representations for SAT Instances**:
Based on the Algebraic Normal Form (ANF), we propose a graph structure to succinctly represent the excessive XOR operations in cryptographic problems. We then design two ways to encode the AND operations in the ANF-based graph to represent SAT instances derived from cryptographic problems. Our ANF graph is more efficient than the general graph forms for SAT, the Literal-Clause Graph (LCG) and the Variable-Clause Graph (VCG).
- **Better Performance**:
We propose (supervised) learning to solve, for the first time to our knowledge, the challenging cryptographic problem of plaintext-ciphertext satisfiability prediction, which could otherwise be intractable for traditional SAT methods. Our proposed GNN-based classifier CryptoANFNet, with ANF-based SAT instances as input, achieves a 50x speedup over heuristic solvers and outperforms the SOTA learning-based SAT solver NeuroSAT with 96% vs. 91% and 72% vs. 55% accuracy on small- and large-scale datasets generated from the Simon and Speck algorithms, respectively. Additionally, we referred to the benchmark G4satbench to compare CryptoANFNet with various baselines; the results show that CryptoANFNet exhibits better performance on different datasets.
- **Efficient Model Structures and Adaptability to SAT instances in high-Order ANF Formulas**:
We design CryptoANFNet to retain fewer embedding parameters, making the model more efficient. As described in Section 4.3 of our paper, ANF clauses involve both AND and OR operations. When constructing the ANF graph for CryptoANFNet, there are first-order nodes, positive and negative clause nodes, and high-order nodes like $x_1x_2$. To perform message passing efficiently, CryptoANFNet only retains embeddings for first-order nodes (vanilla literals) and clause nodes.
Since high-order nodes are used only as intermediate results in the message-passing process, CryptoANFNet can be adapted to high-order ANF formulas. Using the sparse adjacency matrix, we only need to retain the edges between vanilla literal nodes and higher-order nodes, without storing the entire adjacency matrix, which grows exponentially with higher orders. This approach effectively handles datasets comprising SAT instances with literals in various orders.
- **Key solving algorithm**
We extend our approach to the key decryption problem. We propose a key-solving algorithm that derives ANF-based SAT instances, further simplified by our devised techniques, from the plaintext and ciphertext, and uses the outputs on two derived SAT instances to infer the key values. It boosts accuracy from 76.5% to 82% and from 72% to 75% on datasets generated from Simon and Speck, respectively.
2. Discussion
- **Generalization to high-order ANF formulas**: Because high-order nodes are used only as intermediate results in the message-passing process, CryptoANFNet can be adapted to high-order ANF formulas by using the sparse adjacency matrix. CryptoANFNet still outperforms NeuroSAT on the second-order ANF formula dataset and achieves better performance on higher-order ANF formulas.
- **Generalization to other ciphers**: Theoretically, the methods proposed in this paper can be extended to more complex block ciphers that can be represented by ANF formulas. However, in practice, the efficiency of traditional solvers limits the generation of datasets for some complex block ciphers. Some block cipher structures (such as the use of S-boxes) lack efficient algorithms for conversion to ANF or CNF formulas. This results in a large number of clauses and literals, making it challenging to generate efficient datasets for the network to train on.
- **Deliver a proof for the prediction**: We can provide a proof for our prediction using the key-solving algorithm: CryptoANFNet can sequentially deduce the value of each key bit. Besides, CryptoANFNet can also directly extract satisfying assignments of the key from the embeddings of the literals.
- **Unfair comparison of the running time between GNN and that of a SAT solver**: We acknowledge the reviewer's concerns regarding the potential for prediction errors in ML-based solvers. No model can guarantee 100% accuracy across all datasets. However, a GNN-based solver can be used to obtain a preliminary solution for a key or a satisfying assignment with relatively high accuracy. Since verifying the correctness of these solutions is highly efficient, GNN-based solvers like CryptoANFNet offer a significant advantage over traditional solvers in terms of application efficiency, aligning with the primary research objective of our paper. Therefore, we perform this comparison to validate the advantages of CryptoANFNet in this regard. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural Network Reparametrization for Accelerated Optimization in Molecular Simulations | Accept (poster) | Summary: The paper proposes a novel neural network reparameterization approach which provides a flexible alternative to traditional coarse-graining methods for molecular simulations. Unlike CG methods that strictly reduce degrees of freedom, the proposed model can dynamically adjust system complexity, potentially increasing it to simplify optimization. Fitting a neural network which maps the reduced space to the fine-grained space, the approach maintains continuous access to fine-grained modes and eliminates the need for force-matching, thereby enhancing both the efficiency and accuracy of energy minimization. The framework incorporates arbitrary neural networks like GNNs, to perform reparametrization, achieving significant improvements in molecular simulations by optimizing energy minimization and convergence speeds.
Strengths: 1. The neural network reparameterization approach gets rid of the force-matching and the non-unique back-mapping in traditional CG methods and shows promising potential in enhancing molecular simulations.
1. The Hessian matrix provides a precise mathematical framework to identify slow modes that correspond to the most stable and significant collective motions in the system. Such an innovative strategy does not need to use an encoder-decoder for FG-CG mapping.
1. Dynamically adjusting the effective DOF allows the model to increase complexity when needed, capturing essential details without a fixed reduction in resolution, thus resulting in more accurate representations of the system. By focusing on slow modes, the method can more effectively explore the energy landscape, avoiding local minima and achieving better convergence to global minima.
1. The authors provide solid mathematical derivation to support their proposed approach.
Weaknesses: 1. The manuscript needs improvement: it is difficult to follow the paper as the contents are not well-organized and some crucial details are missing.
- The introduction and background could be reorganized. It's better to first introduce the key challenge and discuss the approach and key contributions at a high level, while the details of CG and the proposed approach could be moved to Methods. This can also remove the overlapping between the Introduction and Background.
- Due to the lack of a high-level introduction to the overall workflow (usually the last paragraph and Figure 1), the discussion on NN reparameterization and Hessian of potential energies is confusing at first glance because readers are not clear about their roles in the big picture. Similarly, the subsections in the Hessian part propose many theorems and corollaries but do not explicitly address their contribution to the proposed workflow.
- The Experiment part should stress what the experiment settings are, where we can find the results, and what conclusions we can draw. The information is not clearly stressed even though I can find some of it in the paragraphs.
- Some crucial information is missing, for example, the GNN training details and the loss function. Without such information, it's hard to assess the soundness of the method.
- Figure captions also need improvement. For example, the caption of Figure 1 consists of many short sentences, but the key takeaways are not emphasized.
1. The major contribution on machine learning of this paper is the use of a neural network to map slow modes to FG configurations. However, this neural network model is not novel and does not introduce new ML techniques or architectures. Therefore, I'm afraid that the work may not be fully aligned with the scope of the conference. Additionally, it is unclear whether a neural network is necessary for this mapping, as other simpler or more traditional methods might suffice for this task. A more thorough justification for the use of a neural network over other potential approaches is needed.
1. The manuscript involves biology background knowledge especially in the experiments. However, they are not well explained, making the experimental session more difficult to follow.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The proposed approach has an edge over full-atom simulations and CG methods. Meanwhile, various approaches such as metadynamics with collective variables are commonly used to learn and simulate the dynamics of the system in a reduced space. Have the authors compared the proposed neural network reparametrization method with those approaches?
1. While slow modes represent directions in the configuration space along which the energy landscape is relatively flat so that they capture low-frequency motions, it is not necessarily clear that these are the most relevant directions for the system's evolution that we care about. In other words, the slow modes do not necessarily include more "science". Thus, I'm wondering whether relying solely on slow modes may overlook critical aspects of the system's behavior.
1. Is the reparameterized neural network generalizable to different molecular systems or molecular structures under different conditions (e.g. temperature, pressure)?
1. Empirical potentials are used in this work. However, empirical potentials cannot achieve high accuracy. Have the authors tried machine learning potentials for MD simulations?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discussed the limitation of lack of experiments for comparison against traditional CG methods. No potential negative societal impact is involved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We agree that the manuscript requires substantial revisions for clarity and flow. Below, we aim to address each of your concerns, and to resolve the criticisms thoroughly.
## Weaknesses
__1.__ We agree that intro and background will benefit from significant reorganization.
1. Our revised intro will start with an overview of challenges in scientific simulations, such as the proliferation of saddle points and local minima leading to suboptimal results. While conventional dim reduction methods like CG offer partial solutions, they encounter issues like back-mapping and force-matching. Instead, we propose an innovative approach using an overparametrized neural ansatz. We demonstrate that CG reparam or a well-designed GNN ansatz, incorporating Hessian slow modes, achieve significantly lower energy states compared to direct optimization.
2. We are making a new Fig. 1 (see attached pdf) to outline the methodology.
3. In Sec. 2 on the Hessian, we'll clarify the motivation for using slow modes, rooted in the difference in fast vs slow mode evolution rate, which causes slow convergence at saddle points. Our goal is to adapt the optimization process to grant direct access to slow modes, hoping that it helps escape such saddle points. However, this approach faces challenges: 1) Changes in the Hessian may alter the slow modes during optimization. 2) The need to modify the optimization to favor slow modes. We address these by showing the stability and robustness of slow modes and by proposing linear CG and GNN reparam. Our experiments show superior efficacy of the GNN approach.
4. We add GNN parameter details in a table in the appendix. For the experiments in Fig. 1, the GNN hidden dims are [20, 10, 3]. We used $n/3$ slow modes $\Psi$ to obtain the adjacency matrix $A = \Psi\Psi^T$ used in the GNN layers, with output $h^l = \sigma(Ah^{l-1}W + W_s \odot h^{l-1} + b)$, where $W_s$ are self-loop weights and $b$ biases.
5. We are improving figure captions.
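The layer in point 4 can be sketched in a few lines of numpy (sizes are ours, and we assume equal input/output feature dims so the elementwise self-loop term is well-shaped):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 12, 4, 8  # particles, number of slow modes, feature dim

Psi = rng.standard_normal((n, k))  # stand-in for the k slowest Hessian modes
A = Psi @ Psi.T                    # adjacency built from slow modes

h = rng.standard_normal((n, d))    # node features h^{l-1}
W = rng.standard_normal((d, d)) / np.sqrt(d)
W_s = rng.standard_normal((n, d))  # self-loop weights
b = np.zeros(d)

# h^l = sigma(A h^{l-1} W + W_s * h^{l-1} + b), with tanh as sigma
h_next = np.tanh(A @ h @ W + W_s * h + b)
print(h_next.shape)  # (12, 8)
```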
__2.__ We appreciate your comments and recognize the need for a clear justification of our approach. While our NN model is not a new architecture, using it to map Hessian data to FG modes is a novel approach within physics simulations. This use of NN diverges from traditional ML, mainly centered around supervised and unsupervised learning. We use NN as an ansatz in scientific optimization. This opens up new potential uses of NN in the realm of AI for science. It should also be useful in other ML tasks where saddle points or flat minima are problematic.
Furthermore, our results clearly demonstrate that simpler methods like linear slow mode reparam often fall short compared to GNN. For instance, in Fig. 1 "Pure LJ Loop," where the Lennard-Jones (LJ) potential complicates optimization due to its flatness and shallow minima, traditional methods like GD and linear CG reparam achieve low energies but fail to accurately model the complete coil formation, which the GNN effectively accomplishes. This is evidence for the efficacy of our neural reparam in complex energy landscapes, making our work highly relevant for ML-focused venues.
__3.__ We agree with the need for clearer bio background and will make the following short additions to aid readers:
1. **Overview of Experiments**: We'll introduce two main experimental setups:
- **Energy Minimization on Synthetic Systems**: Forming a coil using LJ potentials to mimic molecular forces.
- **MD for Protein Folding**: Using the AMBER force field.
2. **Details on Synthetic Coil Experiments**:
- **Quadratic Bonds + LJ**: We'll describe the energy function $E = E_{bond} + E_{LJ}$ where $E_{bond}$ and $E_{LJ}$ are the quadratic bonds and LJ interactions, designed to form a coil.
- **Pure LJ**: Focuses on the challenge of optimizing in a flat energy landscape of LJ, highlighting the difficulty in achieving the lowest energy state due to extremely small gradients.
3. **Protein Folding Using MD**:
- We'll provide a primer on MD's role in studying proteins, emphasizing the use of the AMBER force field to model essential interactions like bond stiffness and angle constraints.
- We simplify MD by omitting solvents and focus on energy minimization using AMBER params from OpenMM. This aligns with our synthetic experiments, facilitating interfacing with ML frameworks (pytorch) to implement our approach.
## Questions
1. As we understand it, metadynamics is mostly concerned with exploring and mapping the free energy landscape, so the goal is different. Nevertheless, our method can be combined with metadynamics by replacing the collective variables with our reparam.
2. We recognize the concern. But note that, while slow modes capture low-frequency motions and help navigate flat energies, our approach integrates all modes, both in the final relaxation phase and via residual connections in the GNN, ensuring no critical dynamics are missed. As in lines 29-33, our motivation for using slow modes is that they converge slowly and that including fast modes forces us to use smaller learning rates for numerical stability. However, the flexibility of our GNN method allows the model to adjust the weighting of fast and slow modes to decrease the energy faster.
3. Potentially yes, if the slow eigenvectors are transferable. The GNN graph comes from the Hessian backbone. In some settings, such as a protein complex, the total Hessian may be approximately block-diagonal, with each component forming a block. The slow modes are then mostly localized on individual components. The learned GNN weights (encoding the actual atom locations, for example) are more challenging to transfer, but the loss should be invariant to symmetries of the system, such as SE(3) for proteins.
4. We have not, but that is a good suggestion. Our method can be used with any differentiable potential. We used the empirical AMBER force-field because of its popularity in MD.
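To make the role of slow modes concrete, here is a toy illustration (our example, not one of the paper's systems): for a harmonic chain the Hessian is the path-graph Laplacian, and its smallest-eigenvalue eigenvectors are the slow modes used as the reparametrization basis.

```python
import numpy as np

# Harmonic chain E(x) = 0.5 * sum_i (x_{i+1} - x_i)^2:
# the Hessian equals the Laplacian of a path graph with free ends
n, k = 10, 3
H = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
H[0, 0] = H[-1, -1] = 1.0

eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
Psi = eigvecs[:, :k]                  # k slowest modes -> reparam basis

# Reparametrization: fine-grained displacements x = Psi z, optimized over z
z = np.zeros(k)
x = Psi @ z
print(abs(eigvals[0]) < 1e-10)  # True: the zero mode (rigid translation)
```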
---
Rebuttal Comment 1.1:
Comment: Thank the author for the response. The rebuttal has addressed most of my concerns. The methods is actually interesting and refreshing for ML research for scientific applications. With that, I will increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much, we really appreciate your time. Your comments really made a difference in our paper's structure and presentation. We think the reparametrization idea is an underexplored aspect of neural nets and that it has a lot of potential, especially as a new class of ansatze for scientific problems. We are also working on the generalizability and transferability of the modes to larger molecules. Finally, we are applying this methodology to other systems with metastable and slow dynamics, such as glassy systems. We observe that in some regimes the method may provide more advantage than in others.
Strengths: * The proposed method for a proxy of the unweighted adjacency matrix is reasonable and sound.
* The connection between the graph laplacian and the hessian of energy function is interesting and general.
* The proposed method is efficient compared to the gradient descent method which is widely used for simulating the protein dynamics.
Weaknesses: * The efficiency of this work is largely dependent on the complexity of the targeted potential function as it still requires computing the gradient from the potential function. In other words, if we have a large-scale AI model that can more efficiently predict the most stable structures in one shot such as the AlphaFold series, the proposed method might not be helpful to find the optimal conformation.
* The optimization process needs to be clearly described. I suggest the authors provide an algorithm for how to optimize the structures including the intervention of the potential functions and the model architecture.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Is it possible to find the globally optimal structures? How much does the initial state affect the final predicted structures?
* For the large-scale protein, how much can the proposed method reduce the computational cost?
* Does the proposed method can significantly reduce the memory cost?
Typos:
line 131: mdoes -> modes
line 176: is relies -> relies
line 176: eighted -> weighted
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: As the authors noted, more experimental results are needed to show the effectiveness of the proposed method. Especially, I recommend comparing the computational cost and the quality of the predicted structures with the state-of-the-art structure prediction models on large-scale proteins.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time taken to review our work. Please check our shared rebuttal above for discussion on improving presentation and new experimental results.
## Weaknesses:
1. __Efficiency:__ We agree that models like AlphaFold take a different approach. However, we think our approach is crucial for domains where we do not have huge databases like those available for proteins. Our approach is inherently "data-free," offering advantages in scenarios where access to molecular dynamics data is limited or unavailable, yet there is a need to simulate the entire system. Drug discovery or material design would benefit from faster ab initio simulations. This will allow us to go beyond learning patterns in existing molecules and experiment with entirely new ones.
2. __Clarity of method:__ Please see our shared response at the top of the page. We have included an enhanced figure in the attached pdf that illustrates the process of extracting coarse-grained modes and their integration into the Graph Neural Network (GNN) reparametrization framework. We also sketch out the GNN architecture in there. This addition aims to clarify the methodology and strengthen the overall presentation of our approach.
## Questions:
1. __global minima and init:__ The energy landscape of proteins is inherently "rugged," (i.e. has many local minima and saddle points) making the search for a global minimum particularly challenging. Consequently, our objective is to identify the most favorable local minima. Even for relatively small proteins, varying initial conditions can result in different local minima. To illustrate how sensitive our methodology is to these initial conditions, we have included a figure in the pdf (see figure c and its caption). For the small protein 2JOF we find starting from three different initially unfolded positions, the final energy obtained by direct gradient descent come out different.
2. __Large proteins:__ If the challenges to computing the Hessian for large proteins can be overcome (See our response to rev rxS5, Q3: scaling) our method could save computational costs because it generally requires a fraction of the number of iterations to reach a given energy. (see figure a in pdf). However, each step takes more compute due to forward and backward pass through the NN. With efficient hardware such as GPU, this overhead could reduce significantly.
3. __memory cost:__ We think our model actually requires more memory due to the extra neural network. The extra memory cost can be linear or superlinear in the number of particles, depending on how the number of slow modes is chosen to scale with the system size. Our argument is that in the era of large compute, this memory overhead is ok, as the overparametrization can yield significant benefits in terms of convergence to deeper minima and avoiding saddle points.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' clarification. The authors clearly describe their methods by providing additional figures.
In my understanding, there are technical pros and cons to this work. On the pro side, it is data-efficient and has a lower memory cost, especially compared to large structure prediction models such as AlphaFold. However, it is disadvantageous in terms of the computational cost of the Hessian, which hinders scalability, and it is dependent on the initial state, as it can fall into local minima conformations.
On the other hand, in the aspect of the methodology, the connection between the Hessian and Laplacian and identifying effective DOF are promising, which can lead to other future works that are related to protein dynamics.
By considering these points, I raise my score to 6
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your consideration. Yes, we agree with you about the limitations and are working to address them. One observation we had regarding protein dynamics was that, due to the strong quadratic forces from chemical bonds, the slow modes of the Hessian overlapped very strongly with the slow modes of the Laplacian of the _molecular graph_ alone (we will share the figure soon). This is a big deal, because the molecular graph is very sparse, with an average degree between 2 and 3. Hence, instead of the expensive Hessian computation, we found that using the sparse molecule graph led to faster performance. Our final results for protein folding all use the molecule graph as a proxy instead of the Hessian. This makes our method scalable even for large molecules. Additionally, our method could be applied to coarse-grained dynamics and help CG models find deeper energy minima.
We are also actively working on the DOF from Hessian problem and have found further strong results, suggesting one can explore the phase space of molecules efficiently using these DOF. We also agree that it merits its own paper. | Summary: This paper presents a novel approach to molecular simulations using neural network reparametrization as an alternative to traditional coarse-graining methods. The key idea is to reparametrize fine-grained modes as functions of coarse-grained modes through a neural network, maintaining continuous access to fine-grained modes and eliminating the need for force-matching. The authors demonstrate improved performance on Lennard-Jones potentials and protein folding simulations compared to conventional methods.
Strengths: * Theoretical foundation: The paper provides a solid theoretical analysis of the properties of physical Hessians and how they relate to slow modes in the system.
* Experimental results: The approach shows promising results on both synthetic systems (Lennard-Jones potentials) and protein folding simulations, demonstrating faster convergence and lower energy states in many cases.
Weaknesses: * Unclear presentation: The paper lacks a clear problem definition and objective function. A flowchart or algorithm illustrating the coarse-graining process, GNN structure, and how it drives molecular dynamics would significantly improve clarity.
* Insufficient explanation of DOF experiments: The paper would benefit from an algorithm or graphical illustration explaining the process of finding effective degrees of freedom.
* Inadequate explanation of GNN graph structure: The authors could have done a better job explaining the graph used in the GNN, particularly the force constant matrix and its physical meaning.
* The paper would benefit from a more extensive comparison to state-of-the-art coarse-graining and optimization methods. Additionally, a more thorough analysis of the computational costs and scalability of the approach would strengthen the paper.
Minor Issues:
* Typos on Line 131, 176
Technical Quality: 2
Clarity: 1
Questions for Authors: * What is $n_0$ in line 235?
* Does the design of your GNN need to consider equivariance, such as SE(3)? I.e., if you rotate your system, the Hessian and the Laplacian would be transformed in a predictable way.
* How well do you expect this method to generalize to more complex force fields or larger biomolecular systems?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We try to address them below. Please also read our general rebuttal above, which details our plan for improving presentation and more.
## Weaknesses:
1. __Unclear presentation:__ We agree. Please see above and the attached pdf for flow-chart and algorithm.
2. __Effective DoF:__ We will add more details, but the gist of it is using Corollary 2.3.1 and 2.3.2 as a loss function to find the symmetry operators $L$. We can approximately satisfy 2.3.1 when $\|LH\|< \epsilon$, which happens when $L$ lives in the slow mode subspace. Next, 2.3.2 can be satisfied when $[L,H]=0$, so we can solve an optimization problem $L_0 = \mathrm{argmin}_L \|[L,H]\|$.
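As a minimal illustrative sketch of this optimization (a random toy symmetric matrix stands in for $H$; the sizes, learning rate, and normalization below are our assumptions, not the paper's actual setup):

```python
import torch

torch.manual_seed(0)
n = 6
M = torch.randn(n, n, dtype=torch.float64)
H = M + M.T  # toy symmetric "Hessian"

L = torch.randn(n, n, dtype=torch.float64, requires_grad=True)

def comm_loss(L):
    # normalized commutator norm ||[L, H]||^2 / ||L||^2; the denominator rules out L -> 0
    C = L @ H - H @ L
    return (C**2).sum() / (L**2).sum()

initial = comm_loss(L).item()
opt = torch.optim.Adam([L], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = comm_loss(L)
    loss.backward()
    opt.step()

final = comm_loss(L).item()
print(initial, final)  # the commutator norm drops by orders of magnitude
```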
3. __GNN graph:__ please see attached pdf and general comment above. The adjacency of the graph (nxn with n being number of particles) is $A = \Psi\Psi^T$ with $\Psi$ being the slow modes.
4. __Comparison with SOTA CG:__ True, but unfortunately existing MD frameworks are very opaque and difficult to modify. We tried and failed to interface our method with OpenMM. This seems to be a general problem with the field of MD. However, we recently learned about an endeavor to implement MD in Jax, and we will try to migrate to that. Currently, however, we are unable to offer better comparisons with CG. We decided to present the approach despite lacking a CG comparison because it is a flexible alternative to CG and can even be combined with CG.
## Questions:
1. __n0:__ In the GNN reparam, n0 denotes the node feature dimension of $Z_h$, which is the input to the GNN; thus n0 is also the input dimension of the GNN. Recall, the atom positions $X$ are reparametrized as $X = \rho(Z)= GNN_\theta (Z_h)$ (as in equation 2), where $Z=(Z_h, \theta)$ and $\theta$ are the GNN weights.
2. __Equivariance:__ very good question! Yes! The formula for the Hessian backbone (eq 7) takes a norm over spatial indices and is therefore invariant under SE(3). We will emphasize this point in the paper. Thus, the Laplacian is also SE(3) invariant and the slow modes do not have a spatial index (scalars under SE(3)).
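This invariance is easy to check numerically. Below is a small sketch with a toy pairwise spring potential (the potential, particle count, and eq-7-style contraction are illustrative assumptions): the backbone is unchanged under a random rigid motion of the particles.

```python
import torch

def energy(X):
    # toy pairwise potential: depends only on inter-particle distances
    n = X.shape[0]
    E = X.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            E = E + (torch.norm(X[i] - X[j]) - 1.0) ** 2
    return E

def backbone(X):
    # square the Hessian and contract the spatial indices (as in eq 7)
    H = torch.autograd.functional.hessian(energy, X)  # shape (n, 3, n, 3)
    return (H**2).sum(dim=(1, 3))                     # shape (n, n)

torch.manual_seed(0)
X = torch.randn(5, 3, dtype=torch.float64)
Q, _ = torch.linalg.qr(torch.randn(3, 3, dtype=torch.float64))  # random orthogonal matrix
t = torch.randn(3, dtype=torch.float64)                          # random translation

# the backbone is invariant under the rigid motion X -> X Q + t
print(torch.allclose(backbone(X), backbone(X @ Q + t)))  # True
```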
3. __Scaling:__
Our methodology is compatible with any differentiable potential energy function. The primary limitation for large biomolecular systems is computing the Hessian matrix to get the slow modes. In biomolecules and MD, the loss is generally a sum of pairwise or triplet interactions, and therefore the Hessian can also be written as a sum of such easy-to-compute Hessians. Thus we don't have to compute the full Hessian matrix until we want the spectrum. We have relied on existing methods such as Lanczos and Davidson to get the slow spectrum. Whether the complexity of the force field becomes a burden for scaling depends on the computation graph of the Hessian from autograd, but our guess is that it won't be a problem.
The GNN can be scaled up by sparsifying the adjacency or by other methods people have developed, so we don't expect the reparametrized part to be an issue.
---
Rebuttal Comment 1.1:
Title: Replies to Rebuttal
Comment: I acknowledge that I have read the rebuttal. I'll maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you, we appreciate your time. We wanted to add a couple more points:
* __Scaling for biomolecules:__ We forgot to mention, the method is already very scalable, as follows. We have observed that the Hessian slow modes overlap strongly with the slow modes of the Laplacian of the _molecule graph_. This is likely because the molecule graph enters the Hessian via quadratic bond energies. We found that just using the full molecule graph inside the GNN, instead of the Hessian slow modes, also yielded good results. Since the molecule graph is quite sparse and the number of edges scales linearly with the number of nodes (average degree 2-3), passing through the GNN becomes linear in $n$, making it quite scalable.
* __Equivariance:__ The slow modes can also be made equivariant to other spatial groups by using the correct invariant metric. For example, the Lorentz group $SO(1,3)$ from special relativity preserves the Minkowski metric $\eta = \mathrm{diag}(-1,1,1,1)$, meaning $g^T\eta g = \eta$ for $g\in SO(1,3)$. If, instead of the $L_2$ norm on spatial indices, we contract them with this metric, $\mathbf{H}_{ij} = \sum_{\mu\nu\rho\sigma} H_{ij}^{\mu\nu}H_{ij}^{\rho\sigma} \eta_{\mu\rho}\eta_{\nu\sigma}$, we get equivariance under the Lorentz group.
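As a quick numerical sanity check of the metric identity (the boost below is an arbitrary illustrative element of $SO(1,3)$):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric

phi = 0.7  # rapidity of a boost along the x-axis
g = np.eye(4)
g[0, 0] = g[1, 1] = np.cosh(phi)
g[0, 1] = g[1, 0] = np.sinh(phi)

# g in SO(1,3) preserves the metric: g^T eta g = eta
print(np.allclose(g.T @ eta @ g, eta))  # True
```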
We hope you also find that the flow-chart and GNN structure plots address your concern and that you still consider the work for a higher score. | Summary: This paper proposes a novel approach for molecular simulations using neural network reparametrization. The authors first motivate the need for this work, specifically the traditional coarse-graining (CG) methods reduce the number of degrees of freedom (DOF) to improve computational efficiency. However, they require back-mapping and force-matching steps, which can be cumbersome.
The major contribution is the Hessian backbone framework, which allows calculation of the Hessian using a weighted graph Laplacian (although restricted to pairwise invariant potentials). With GNN reparameterization, the experiments on the synthetic coil show good improvement in speedup, but not necessarily in reaching lower energies. The proposed system, instead of reducing DOF, allows for a flexible representation of the system. Their neural reparametrization approach is not limited to reducing DOF but can also increase them when required.
The paper showcases the effectiveness of the method on LJ systems and protein folding simulations. Results suggest the reparametrization approach, especially using GNNs, can achieve lower energy states compared to traditional CG methods, and also faster convergence.
Strengths: 1) Proposed a hessian backbone approach to get slow modes using graph Laplacian
2) Eliminates the need for force matching and back mapping by reparameterization using slow modes and GNN
3) Average over perturbed configurations taken to address dynamic variation in Hessian
4) Data-free optimization: Doesn't require extensive training data, unlike traditional machine learning approaches.
5) Code is also provided.
Weaknesses: 1) Different choices for fraction of eigenvectors in CG equations are mentioned 3x (#AminoAcids), 30%, 50%, and 70%, but the results corresponding to them are not shown, it is important as it is related to the 'epsilon' in slow mode calculations.
2) Literature review is weak, several works both recent and classical on optimization in molecular systems are missing e.g. learned optimizers[1], graph reinforcement learning [2], FIRE [3]
4) Presentation issues: what are n and d in line 85? The paper can be made more reader-friendly with a table of symbol/variable names and their meanings.
Missing citations:
line 56-59
1. Traditional optimization in physics-based models, like (MD), faces unique challenges due to the
shallow nature of these models, where physical DOF are the trainable weights. Additionally, the
interactions occur at multiple scales, from strong covalent bonds to weak van der Waals forces,
leading to slow convergence in gradient-based method
The authors should cite relevant papers for the above paragraph.
[1] Merchant, A., Metz, L., Schoenholz, S.S. & Cubuk, E.D. (2021). Learn2Hop: Learned Optimization on Rough Landscapes. *Proceedings of the 38th International Conference on Machine Learning*, PMLR 139:7643-7653. https://proceedings.mlr.press/v139/merchant21a.html
[2] Bihani, V., Manchanda, S., Sastry, S., Ranu, S. & Krishnan, N.M.A. (2023). StriderNet: A Graph Reinforcement Learning Approach to Optimize Atomic Structures on Rough Energy Landscapes. *Proceedings of the 40th International Conference on Machine Learning*, PMLR 202:2431-2451. https://proceedings.mlr.press/v202/bihani23a.html
[3] Bitzek, E., Koskinen, P., Gahler, F., Moseler, M., & Gumbsch, P. (2006). Structural relaxation made simple. Physical Review Letters, 97(17):170201.
Technical Quality: 3
Clarity: 2
Questions for Authors: Questions
1) How are the inputs to GCN network 'Z_h0' initialized?
2) Could you clarify what is the loss function used to train the GNN network? details related to training shall be provided.
3) Can the approach be used for disordered glassy systems, e.g., the Kob-Andersen binary LJ model glass, which is known to have slow dynamics? Does it get stuck in higher-energy minima? It would be good to show the coarse-grained approach on glassy systems.
4) The authors mention they use the Adam optimizer with a learning rate of $10^{-2}$. Can the authors show the impact of lower and higher learning rates? Does the conclusion remain the same?
Minor Comments
1) Typo in line 143: Power of z in repulsive term
2) Typo in line 176: spelling of 'weighted'
3) Line 131 typo: mdoes
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you, we appreciate your pertinent comments. Please also check our general response above about improvements to the presentation.
## Weaknesses:
1. The plot was mistakenly omitted. We are adding it to the appendix.
2. Thanks, adding the citations.
3. n is the number of particles, d is the embedded dim, e.g. d=3 for 3D coordinates x,y,z.
4. Missing citations: we are adding the citations for that paragraph
## Questions:
1. For $Z_{h0}$ initialization, we used two methods: 1) random; 2) started from random but optimized together with GNN weights to match a given initial configuration of atoms. Method 2 introduces a minimal overhead: for protein 2JOF with 284 atoms it converges after 900 steps, taking 0.67 seconds.
2. The GNN loss is the energy function $\mathscr{L} = E$! This is the crucial point about reparametrization. We still deal with $E(X)$ but now $X$ is $X = \rho(Z)$ (as in eq 2) where $Z=(Z_h, \theta)$ and $\theta$ are the GNN weights. So the optimization problem changes from $\mathrm{argmin}_X E(X)$ to $\mathrm{argmin}_Z E(\rho(Z))$ for the GNN or other reparametrizations.
Our framework is unsupervised. The training of the GNN is the goal: The positions of atoms are encoded in GNN weights and finding the final atom positions means minimizing the loss function which is the potential energy function. After the training, there is no inference step.
Our GNN setup consists of two GCN layers with residual connections, followed by a projection to 3D. $Z_h$ and each GCN layer have 100 hidden dimensions. During training, once the energy curve begins to plateau (a minimum change in energy of 0.1 and 20 patience steps), we stop the GNN reparam. We then continue to optimize using the full FG modes again, with the same loss (energy) function as the GNN.
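A minimal runnable sketch of this unsupervised setup (a toy energy and a small MLP standing in for the GNN; all sizes and the energy function are illustrative assumptions): the training loss is the energy itself, optimized over $(Z, \theta)$.

```python
import torch

torch.manual_seed(0)
n, h = 10, 16

def energy(X):
    # toy potential with minimum 0 when every coordinate sits at +/-1
    return ((X**2 - 1.0) ** 2).sum()

# reparametrization X = rho(Z): a small MLP plays the role of the GNN
net = torch.nn.Sequential(torch.nn.Linear(h, h), torch.nn.Tanh(), torch.nn.Linear(h, 3))
Z = torch.randn(n, h, requires_grad=True)

opt = torch.optim.Adam([Z] + list(net.parameters()), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = energy(net(Z))  # the loss IS the energy; no labels, no inference step
    loss.backward()
    opt.step()

final_energy = energy(net(Z)).item()
print(final_energy)  # far below the energy at initialization (~30)
```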
3. This is a great suggestion. We believe that the Kob-Andersen binary Lennard-Jones (LJ) glass model (KAM) could also be a potential application. KAM closely resembles our synthetic pure LJ loop simulations, with the key difference being that, in KAM, particles do not have a fixed underlying graph during optimization. Consequently, in KAM, we need to construct a spatial proximity graph and update it dynamically during training to leverage the GNN framework. Due to time constraints, we haven't conducted extensive experiments, but we have run a few, with both gradient descent (GD) and GNN reparam.
4. Yes, we have run sweeps of the learning rate, and we find the lowest energy is consistently achieved by a GNN, though direct GD beats some GNN runs that have a large LR.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: I thank the authors for their response. After these clarifications, I increase my score.
I agree with other reviewers that the presentation needs to be improved. Also, the new figure with the learning rate sweep could be a continuous plot (instead of the discrete version), showing the change in energy.
---
Reply to Comment 1.1.1:
Comment: Thank you, we will also produce the loss curve plots in the coming days. We will also try to include our preliminary results on the Kob-Andersen system. We are definitely planning to apply this method to glassy systems. It would be interesting to investigate whether this methodology provides more advantage in certain hard regimes than in others.
Rebuttal: Thank you all for the very constructive comments. Some issues were raised by multiple reviewers. Here we address the shared concerns. Below, we respond point-by-point to each review.
## Presentation
We agree that the presentation of the paper needs significant improvement and reorganization, especially the intro and background. We are doing the following:
1. __Problem statement:__ Our revised intro will start with an overview of challenges in scientific simulations, such as the proliferation of saddle points and local minima leading to suboptimal results. While conventional dim reduction methods like CG offer partial solutions, they encounter issues like back-mapping and force-matching. Instead, we propose an innovative approach using an overparametrized neural ansatz. We demonstrate that CG reparam or a well-designed GNN ansatz, incorporating Hessian slow modes, achieve significantly lower energy states compared to direct optimization.
2. __Flow-chart and algorithm:__ We are making a new Fig. 1 (see attached pdf) to outline the methodology. We are also adding the step-by-step algorithm (detailed next).
3. __Motivation for using Hessian slow-modes:__ In Sec. 2 on the Hessian, we'll clarify the motivation for using slow modes, rooted in the difference in fast vs slow mode evolution rate, which causes slow convergence at saddle points. Our goal is to adapt the optimization process to grant direct access to slow modes, hoping that it helps escape such saddle points. However, this approach faces challenges: 1) Changes in the Hessian may alter the slow modes during optimization. 2) The need to modify the optimization to favor slow modes. We address these by showing the stability and robustness of slow modes and by proposing linear CG and GNN reparam. Our experiments show superior efficacy of the GNN approach.
4. __GNN details:__ We add GNN parameter details in a table in the appendix. Our GNN consists of GCN layers with self-loops and residual connections. For the experiments in Fig 1, the GNN hidden dims are [20,10, 3], i.e. starting from $Z_h$ with 20 dim embedding, and one GCN layer with 10 hidden dims and a projection layer down to 3D. For the protein experiments we had dims [100,100,3]. We used $n/3$ slow modes $\Psi$ to get adjacency matrix $A= \Psi\Psi^T$ used in the GNN layers, which are GCN with output $h^l = \sigma(Ah^{l-1}W +W_s \odot h^{l-1} + b)$ with self-loop weights $W_s$ and biases $b$ . Here $h^l \in \mathbb{R}^{n\times d_l}$ ($n$ particles in $d_l$ hidden dims). We concatenate outputs along feature dims of all GCN layers into $h = [Z_h| h^1...]$ (dense residual connections) and pass them through a final projection along feature dims to get them to $d=3$ dimensions.
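A sketch of one such layer stack (the activation, initialization, and toy sizes below are our assumptions; only the layer formula and dense residual concatenation follow the description above):

```python
import torch
import torch.nn as nn

class GCNReparam(nn.Module):
    """GCN layers h^l = sigma(A h^{l-1} W + W_s * h^{l-1} + b), dense residual
    concatenation h = [Z_h | h^1 | ...], and a final projection to 3D."""

    def __init__(self, A, d_hidden=100, n_layers=2):
        super().__init__()
        self.A = A  # adjacency A = Psi Psi^T built from the slow modes
        self.W = nn.ModuleList([nn.Linear(d_hidden, d_hidden) for _ in range(n_layers)])
        self.W_s = nn.ParameterList([nn.Parameter(torch.ones(d_hidden)) for _ in range(n_layers)])
        self.proj = nn.Linear(d_hidden * (n_layers + 1), 3)

    def forward(self, Z_h):
        feats, h = [Z_h], Z_h
        for W, W_s in zip(self.W, self.W_s):
            h = torch.tanh(self.A @ W(h) + W_s * h)  # self-loop term W_s * h; bias b inside W
            feats.append(h)
        return self.proj(torch.cat(feats, dim=-1))  # positions X in R^{n x 3}

# toy usage: adjacency from k slow modes
n, k, d = 8, 4, 100
Psi = torch.randn(n, k)
gnn = GCNReparam(Psi @ Psi.T, d_hidden=d)
X = gnn(torch.randn(n, d))
print(X.shape)  # torch.Size([8, 3])
```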
5. __Figures:__ We are improving figure captions. We are also including missing figures on using different number of Hessian modes (30%-70%) for proteins.
6. __More experiments:__ We are adding new figures comparing our final energies with OpenMM's `simulation.minimizeEnergy`, which is the most head-to-head comparison with our method. However, OpenMM also uses many tricks for efficiency, such as distance cutoffs for forces, and may use Barnes-Hut or other space partitioning, which we haven't implemented. Thus the only fair comparison we can make right now is against direct gradient descent (no reparam). We ran a __learning rate sweep__ for some protein simulations, and we find that the lowest loss, closest to the OpenMM minimum, is consistently achieved by the GNN (see attached pdf). We also find that at a similar number of iteration steps (5k), OpenMM was at much higher energies compared to the GNN.
## Flow-chart and algorithm
Most of the reviewers asked for a clear algorithm or flow-chart of our methodology. We are including the flow-chart in the attached pdf and will add the algorithm steps as follows:
1. Compute functional Hessian of the energy function $H = \nabla \nabla \mathscr{L}$ w.r.t. Particle positions $X$. Because $X \in \mathbb{R}^{n\times d}$ ($n$ particles with $d$ features), this Hessian will have four indices as $H_{i j}^{\mu\nu} = \partial^2 \mathscr{L}/\partial X_i^\mu \partial X_j^\nu$
2. Evaluate the Hessian over a small ensemble of perturbed positions $\mathbf{Samples}(X) = \{X' = X+\delta X\}$ and compute the __Backbone__ $\mathbf{H}_{ij} = \sum_{X'\in \mathbf{Samples}(X)}\sum_{\mu\nu} H_{ij}^{\mu\nu}(X')^2$ (eq 7).
3. Compute or approximate $k$ eigenvectors $\Psi$ of $\mathbf{H}$ with smallest magnitude eigenvalues.
4. Perform neural reparametrization: express $X$ using a neural ansatz $X = \mathrm{NN}(\Psi,\theta)$ where $\mathrm{NN}$ uses the Hessian modes $\Psi$ and has trainable parameters $\theta$.
5. Optimize the same loss $\mathscr{L}(X)= \mathscr{L}(\mathrm{NN}(\Psi,\theta))$ over the NN parameters $\theta$ instead of the original $X$.
The procedure is quite simple and could be summarized in the following pythonic pseudocode using PyTorch (the actual implementation differs slightly for efficiency):
```python
import torch

# Hessian backbone (eq 7): evaluate Hessians over perturbed samples
H = lambda x: torch.autograd.functional.hessian(Loss, x)  # shape (n, d, n, d)
H_samples = torch.stack([H(x) for x in samples])
H_backbone = (H_samples**2).sum(dim=(0, 2, 4))            # sum over samples and spatial indices
H_lap = Laplacian(H_backbone)
# slow modes: eigenvectors with smallest-magnitude eigenvalues
eig_vals, Psi = torch.linalg.eigh(H_lap)
Psi_slow = Psi[:, torch.argsort(eig_vals.abs())[:k]]
# reparametrization: positions X are produced by the GNN
A = Psi_slow @ Psi_slow.T
gnn = GNN(A, hidden_dims=[h, h, 3])
Z = torch.randn(n, h, requires_grad=True)
# optimization: the loss is the potential energy itself
optimizer = torch.optim.Adam([Z] + list(gnn.parameters()))
for i in range(steps):
    optimizer.zero_grad()
    loss = Loss(gnn(Z))  # recompute X = gnn(Z) every step
    loss.backward()
    optimizer.step()
```
Pdf: /pdf/4f730e3ab372fc24289a9a5fa4476e242b48aaa1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Scalable Neural Network Verification with Branch-and-bound Inferred Cutting Planes | Accept (poster) | Summary: Following the discussion with the authors, I am increasing my score to weak accept.
----
This paper presents a new branch-and-bound-based verification algorithm for neural networks with ReLU activation. The main idea is to produce additional constraints to the verification problem by identifying small combinations of neuronal states which would not lead to an adversarial input, so that these combinations are not explored again in another part of the branch-and-bound tree. Because smaller sets are always better, the authors develop an algorithm for reducing the number of neurons involved in such combinations. They also leverage multiple branch-and-bound trees explored in breadth-first search as a means of producing constraints involving fewer neurons.
Strengths: The ideas presented by the authors are intuitively reasonable and unsurprisingly effective as a consequence. In fact, they have been studied in other contexts with other names. Most notably, the idea of producing a constraint based on a path of the branch-and-bound tree is commonly known as nogood learning (see [1] for an early use in constraint programming, such as with the concept of a conflict set on page 9; and [2] for the first application to mixed-integer programming). The canonical application has been to identify infeasible nodes, but it is not a big stretch to apply the same idea for when the objective function is likely suboptimal (positive objective function when what we need is a negative value). Moreover, the idea of exploring multiple branch-and-bound trees in parallel is not far from solver restarts [3], which have been widely used in constraint programming and subsequently adopted in mixed-integer programming as well.
All in all, this is a paper that speaks for itself because the right ideas were used and better numbers were obtained. Issues come next.
[1] https://ftp.cs.ucla.edu/pub/stat_ser/r77-II-reprint.pdf
[2] https://www.cs.cmu.edu/~sandholm/nogoodsForMip.techReport06.pdf
[3] http://www.cs.cornell.edu/selman/papers/pdf/98.aaai.boost.pdf
Weaknesses: One problematic omission in this paper is that nowhere is it said that MIP solvers also use branch-and-bound. Otherwise, it sounds as if MIP solvers are some sort of black box magic that tries to solve problems in some inefficient way. There are also some broad generalizations such as "MIP solver may not return any cuts before timeout" (149): Which solver? When? Under which conditions? All that a generic MIP solver needs to generate a Gomory cut is the solution of the LP relaxation and the corresponding tableau.
The fact that branches are defined in terms of neuron outputs is also problematic because there is an overlap between the two resulting subproblems when x=0. The effectiveness of branch-and-bound comes from partitioning the feasible set, and moreover from slicing off the parts of the LP relaxation that are not feasible according to the MIP formulation (such as when z=0 is one subproblem, z=1 is another subproblem, and 0<z<1 gets thrown away). Because your implementation is extending someone else's work, I believe that this might be a mismatch between the code and what the paper describes. Since the z variables are nevertheless a way to parameterize the LP problems to be solved along the way, there would be no problem in defining the constraints directly in terms of z instead of x. In fact, this is exactly what you do when you finally formalize the cut that you are using.
When it gets to Proposition 3.1, the statement is unnecessarily complex: why define a diagonal matrix only to multiply it by a vector of ones immediately after? In terms of notation, it doesn't make sense to have output variables indexing which neurons should be removed. Instead, you could use only the corresponding indices. Hence, Z = {1, 3} instead of Z = {x_1, x_3}. Moreover, this is a commonly known cutting plane in MIP, perhaps first introduced in 1972 by [4] and at this point widely known and applied in the MIP community. It would have been better to use the conventional notation $\sum_{i \in Z^+} z_i - \sum_{i \in Z^-} z_i \leq |Z^+| - 1$ directly in the paper rather than using it in the appendix and keeping a less readable version in the paper.
[4] https://epubs.siam.org/doi/10.1137/0123007
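A quick enumeration confirms the cut's behavior; the sketch below uses a hypothetical three-neuron example with $Z^+ = \{1, 3\}$ and $Z^- = \{2\}$, and checks that the inequality cuts off exactly the one forbidden assignment.

```python
from itertools import product

Z_plus, Z_minus = {0, 2}, {1}  # 0-based indices for z_1, z_3 (fixed to 1) and z_2 (fixed to 0)

def violates(z):
    # cut: sum_{i in Z+} z_i - sum_{i in Z-} z_i <= |Z+| - 1
    return sum(z[i] for i in Z_plus) - sum(z[i] for i in Z_minus) > len(Z_plus) - 1

excluded = [z for z in product((0, 1), repeat=3) if violates(z)]
print(excluded)  # [(1, 0, 1)] -- only the forbidden combination is removed
```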
Algorithm 1 relies on a threshold, but values or experiments to find the right one are not mentioned anywhere.
The paper also talks very briefly about prior work, to the point that it sounds as if specialized cutting planes for MIP problems involving neural networks were never proposed before. However, your reference 1 (Anderson et al.) does exactly that. The same is true about [5-6], and a broader perspective about this area can be found in Section 4 of the survey [7].
[5] https://arxiv.org/abs/1810.03370
[6] https://arxiv.org/abs/2102.04373
[7] https://arxiv.org/abs/2305.00241
Other major comments about the writing follow below.
A) The "The NN Verification Problem" paragraph in Section 2 is not very precise because it confuses values with variables and do not properly qualify functions when values are dependent on inputs. What follows is a possible correction to that. In line 84, replace "scalar values" with "scalar variables". In line 85, add "the values of" after "for". In line 86, replace ", when they depend on a specific x" with "for a given input x". In line 87, add to "that limit [the post-activation values of] individual neurons".
B) The text talks about an example in which you no longer want to see both neurons 1 and 3 inactive at the same time, but the example in Figure 1 shows neuron 3 active.
Other minor comments about writing follow below.
10-11: "cutting planes constraints": this is not an adequate expression (all constraints define a plane); use either cutting planes, valid inequalities, constraints, or cuts.
21-22: First sentence is a strong statement with no reference provided and never discussed again.
32: "statuses" -> states (again in 53)
67: "discussed" -> discuss (not past tense)
98: "> 0" -> < 0?
98: "pro[p]erty"
125: "status" -> state
139: remove "BIC"
149: "and [the] MIP solver"
162: "positive regime": active regime?
163: "in [the] LP formulation"
Authors repeated twice in reference 11.
References 13 and 14 are the same.
Use {CDCL} instead of CDCL in reference 28 and {SAT} instead of SAT in reference 30 to keep the casing in the output, since otherwise what you get is cdcl and sat.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) In light of the comments about nogood learning and restarts, can the authors please reframe their contribution in the paper in terms of the existing related work?
2) Can you please qualify your discussion about MIP solvers by being more precise in what they can and cannot do?
3) Are you branching on the output of the neurons (x) , or on the state of the neurons (z)?
4) Can you simplify Proposition 3.1?
5) Can you please describe what thresholds were used for Algorithm 1, and how they were obtained?
6) Can you please rectify the discussion about cutting planes for neural networks by acknowledging prior work on this?
7) Can you please comment on incorporating the major and minor corrections to writing?
8) In lines 145-146, what do you mean by "[49] solved these cuts using GPU-accelerated bound propagation"? What is the concept of solving a cut? Do you mean generating the cut? Do you mean solving the LP relaxation after the cuts are included?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors were very upfront about limitations to their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments and valuable questions. We hope the reviewer can reevaluate our paper based on our response below:
- Q1: Reframe the contribution in the paper in terms of the existing related work about nogood learning and restarts.
Thank you for your valuable input, which allows us to view our research from a broader perspective. Other NN verification papers, such as beta-CROWN, have utilized branch-and-bound techniques specifically tailored to NN verification. While branch-and-bound is a general technique, its adaptation to the unique challenges and requirements of NN verification is what distinguishes our work. We will make sure to cite these works appropriately and discuss their relevance to our context.
- Q2: Discussion about MIP solvers more precisely.
Thank you for your comments highlighting the need for a more detailed discussion regarding MIP solvers. We will clarify in our revision that MIP solvers, including CPLEX, which we primarily used, employ branch-and-bound algorithms as a fundamental part of their operation. We found that in many of these large-scale benchmarks, the MIP solver fails to solve the initial (root) LP relaxation for the verification problem within the timeout threshold, so branch-and-bound never starts and no effective cuts can be generated. For instance, in the cifar100 benchmark, with network sizes between 5.4-31.6 M, and tiny-imagenet, with a network size of 14.4 M, CPLEX was unable to generate any cuts within 200 seconds (the timeout threshold for this benchmark). These examples illustrate the limitations more concretely and contextualize the generic statements we previously made; we will add this discussion to our revised paper.
- Q3: Clarification of branching on the output of $x$ or $z$.
By definition (line 100) of $z$, $z=0$ is equivalent to $x \leq 0$, and $z=1$ is equivalent to $x \geq 0$. This is encoded in Appendix A1, Equations 12 and 13: inserting $z=0$ ($z=1$) restricts $x$ to the non-positive (non-negative) domain. Therefore, branching over $x \leq 0$ vs. $x \geq 0$ is equivalent to branching over $z=0$ vs. $z=1$. Note that technically we cannot use strict inequalities like $x > 0$ or $x < 0$, so the $x=0$ case is needed in both branches. The equivalence of the two formulations has been used in prior work such as GCP-CROWN.
- Q4: simplify Proposition 3.1
We will remove the diagonal matrix and simplify the notation. Additionally, we acknowledge that this cutting plane is a commonly known concept in MIP, first introduced in 1972 by [4]. Here is the revised cut expression:
$$\sum_{i \in \mathcal{Z}^{+}} z_i - \sum_{i \in \mathcal{Z}^{-}} z_i \leq |\mathcal{Z}^{+}| - 1$$
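To make the cut concrete, here is a minimal sketch with hypothetical index sets $\mathcal{Z}^{+} = \{0, 2\}$ and $\mathcal{Z}^{-} = \{1\}$ (our own toy example, not taken from the paper), showing that this "no-good" cut excludes exactly the one branching assignment it was derived from:

```python
from itertools import product

# Hypothetical infeasible branching assignment: z_0 = 1, z_1 = 0, z_2 = 1,
# i.e. Z+ = {0, 2} (neurons branched active) and Z- = {1} (branched inactive).
z_pos, z_neg = [0, 2], [1]

def violates_cut(z):
    # Cut: sum_{i in Z+} z_i - sum_{i in Z-} z_i <= |Z+| - 1
    lhs = sum(z[i] for i in z_pos) - sum(z[i] for i in z_neg)
    return lhs > len(z_pos) - 1

# Only the assignment the cut was derived from is excluded;
# the remaining 7 binary assignments all satisfy the cut.
cut_off = [z for z in product([0, 1], repeat=3) if violates_cut(z)]
```

Enumerating all $2^3$ assignments confirms that only $(z_0, z_1, z_2) = (1, 0, 1)$ violates the inequality, so the cut prunes precisely the verified (infeasible) subtree and nothing else.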
- Q5: Thresholds used for Algorithm 1 and how do we obtain it
The specific threshold value and its justification were detailed in Section A.3.1. The threshold, `drop_percentage`=50% was determined based on preliminary experiments during code development and empirical observations. We conducted multiple trials to assess the impact of different threshold values on the performance and effectiveness of the algorithm. Through these experiments, we observed that a 50% threshold provided a balanced trade-off between pruning weak constraints and maintaining a robust constraint set. We will ensure it is included in future revisions for clarity.
- Q6: Discussion about cutting planes for neural networks by acknowledging prior work on references
We acknowledge that our discussion on prior work was brief, and we appreciate the opportunity to rectify this by acknowledging significant contributions in this area.
1. We will cite and acknowledge Prior Works:
- Anderson et al. propose cutting planes specifically designed for neural networks by leveraging convex-hull constraints.
- [5] explores empirical bounds on the linear regions of deep rectifier networks, contributing to the understanding of the behavior and optimization of neural networks.
- [6] discusses partition-based formulations for optimizing ReLU neural networks, providing a framework for mixed-integer optimization in this context.
- [7] Section 4.2.4 offers a comprehensive overview of cutting planes and related techniques applied to neural networks, situating our work within a broader context.
2. Our work differs from Anderson et al. [1] in several key aspects:
- **Cutting Plane Design:** While Anderson et al. focus on selecting the most violated constraints among an exponential number of convex-hull constraints in **one layer**, our constraint can involve neurons from **any layer**, and our constraint can be easily found in infeasible subproblems during BaB.
- **Computational Expense:** Anderson et al. propose a linear-time method for constraint selection, which can still be computationally intensive for large-scale problems. In their experiments, they use neural networks within 2000 ReLUs, in contrast, we can scale to any network that beta-CROWN works, such as CIFAR100-ResNet-large, a network with 15.2M parameters and 286820 ReLU neurons.
- Q7: Comment on Corrections to Writing
Thank you for your detailed feedback on the writing. We appreciate the opportunity to improve the clarity and precision of our paper. We will carefully review and revise the specified sections to incorporate these corrections.
- Q8: What does "solve these cuts" mean?
To clarify, we don’t mean to generate the cut, the term "solving cuts" was intended to describe the process of incorporating generated cuts—as additional constraints—into the linear programming (LP) problem. This process involves re-solving the LP relaxation with these new constraints to tighten the bounds of the problem.
In our revision, we will replace the phrase "solved these cuts" with "solving the LP relaxation with cuts included", which should more accurately reflect the process.
---
Rebuttal Comment 1.1:
Title: Follow up comments
Comment: I appreciate the effort of the authors with my questions. Here is a brief follow up on each point. I would appreciate a brief rebuttal on those.
- Q1: This is exactly what most of the scholarship on mathematical optimization does: it tailors these general-purpose techniques to a particular problem at hand; yours is no different. By acknowledging that your contribution fits in this broader theme and adapts tried and proven methods (or rediscovers them, which is fair to say since we can assume you were not aware of those), you are doing less "marketing" and better scholarship. In my opinion, that would make your paper considerably better.
- Q2: This is helpful for perspective. Please include those in the paper itself.
- __Q3: This is wrong. Effective branch-and-bound works by defining disjunctions. Not only overlaps are ineffective, but in your case they lead to an algorithm that would never terminate (if you were really branching on $x$ that way, rather than on $z$ as you probably are). By branching on $x \leq 0$ and $x \geq 0$, the second subproblem is identical to the original problem (since $x \geq 0$ if $x$ is the output of a ReLU).__
- Q4: Perfect!
- Q5: Please add a note in the main paper linking to that.
- Q6: Your comments about [5] and [6] do not cover cutting planes at all. Not worth including those references if you are not going to explain the cutting planes in those papers.
- Q7: Good.
- Q8: This is great. Please add some of this to the main paper.
---
Reply to Comment 1.1.1:
Title: We greatly appreciate your constructive feedback! Further clarifications on Q3
Comment: We greatly appreciate your timely response and found your questions and feedback very insightful. To follow up, we would like to clarify further, especially on **Q3**.
Q1, Q2, Q5, Q6, Q8: Thank you for your valuable feedback. We will be sure to add/cite/rephrase our paper based on your suggestions here. They greatly helped us to improve our paper.
Q3: We apologize again for the confusion, and we provided a more detailed answer here since there is no character limit any more. In the formulation of many prior papers (for example, beta-CROWN, Wang et al. NeurIPS 2021), $z$ was not explicitly included in the optimization formulation. However, when $x$ is branched to $x \leq 0$ and $x \geq 0$ cases, **the optimization problem is also changed**, essentially equivalent to branching $z$, as detailed below.
For an unstable ReLU neuron, before branching, the ReLU function $y = ReLU(x)$ is relaxed using the "*triangle relaxation*":
$y \geq 0$
$y \geq x$
$y \leq \frac{u}{u -l} (x - l)$
Where $l$ and $u$ are the bounds of ReLU input ("preactivation bounds"). Note that **this formulation is without $z$**, but it is equivalent to the formulation with $z$ but with the relaxed variable $0 \leq z \leq 1$ projected out, forming a linear relaxation of ReLU. (note that our paper used $\hat{x}$, but I used $y$ to make the response easier to read)
After branching, the neuron becomes a linear function in each branch. When $x \geq 0$, this ReLU neuron will be in the active region, and the **triangle relaxation is replaced by $y = x$**; when $x \leq 0$, the **triangle relaxation is replaced by $y = 0$** (following Beta-CROWN). So, the optimization formulation changed after branching on $x$. This is equivalent to branching on $z$: in the formulation where $z$ appears (Appendix A.1), when we set $z = 1$, removing redundant constraints will yield $y = x$; when we set $z = 0$, removing redundant constraints will yield $y = 0$.
So when we say we are branching on $x$, the optimization formulations will change after branching - **it is *not* simply applying the constraints $x \geq 0$ or $x \leq 0$ directly on the triangle relaxation**; that would be incorrect as you pointed out. This branching procedure will be the same as branching $z$. In our paper, our cutting plane actually requires the formulation with $z$ since cuts were added to these variables. We will follow your suggestion to say we branch on $z$ instead of $x$ to make the setting more clear, and also add the above discussion to the appendix to avoid future confusion.
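The replacement of the triangle relaxation by a linear function in each branch can be sketched numerically. The helper names below (`triangle_bounds`, `branched_value`) are ours for illustration only, not from any verifier's API:

```python
def triangle_bounds(x, l, u):
    # Bounds on y = ReLU(x) under the triangle relaxation of an
    # unstable neuron with preactivation bounds l < 0 < u.
    lower = max(0.0, x)            # from y >= 0 and y >= x
    upper = u / (u - l) * (x - l)  # from y <= u/(u-l) * (x - l)
    return lower, upper

def branched_value(x, active):
    # After branching, the relaxation is replaced by a linear function:
    # y = x in the active branch (x >= 0), y = 0 in the inactive branch (x <= 0).
    return x if active else 0.0
```

For example, at $x = 0$ with $l = -2$ and $u = 2$, the triangle relaxation still admits any $y \in [0, 1]$, whereas each branch pins $y$ to a single value; replacing the relaxation with the exact linear function is what tightens the bounds after branching.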
Thank you again for your very constructive feedback. We hope all the questions have been addressed now, and we sincerely hope you can reevaluate our paper based on our response. Feel free to let us know if you have any additional questions for us.
---
Rebuttal 2:
Title: Feedback on Q3 and Q6: Enhancements and Clarifications
Comment: Thank you for your insightful comments and suggestions regarding Q3 and Q6. I've considered them and would like to share the following points:
For Q3, we agree that it would be beneficial to discuss branching more directly in the main paper. We will add a clarification on how this is done with the triangle relaxation in the appendix, with a brief mention in the main text to direct readers there.
Regarding Q6, we didn't go into depth in our initial response due to word limitations. However, here's the official version we are going to add to the related work section:
[5] investigates the use of parity constraints as a cutting plane method in MILP for ReLU neural networks. These constraints are instrumental in defining a convex hull of feasible assignments, thereby improving the accuracy of approximating the number of linear regions and enhancing the network's expressiveness. While parity constraints (XOR cuts) significantly improve MILP performance by separating assignments with specific properties, they can also increase computational complexity due to their potential exponential growth with the number of variables. It is noteworthy that the XOR constraint used in [5] to construct the convex hull is similar to ours. However, their cut is constructed by solving the primal problem using the MILP solver, whereas ours is derived from infeasible domains in the inexpensive BaB.
[6] delves into the use of cutting planes derived from convex hull constraints to optimize trained ReLU neural networks within MILP. These cutting planes serve as tightening constraints, effectively excluding infeasible solutions and improving solution quality. A notable feature of the proposed method is its linear-time approach to selecting the most violated constraints, which enhances optimization efficiency. By integrating these cutting planes into a partition-based formulation, the method achieves a balance between model size and tightness during optimization. However, the generation and integration of these cuts can be computationally expensive, and not all MILP solvers may support cut generation, potentially limiting their applicability in some scenarios.
Please let us know if there are any further adjustments or if you'd like to discuss this in more detail.
---
Rebuttal Comment 2.1:
Title: Last comment
Comment: This is a good discussion of [6], which in a sense extends Anderson et al (to save you some space). The description of [5] is not correct, but this is a lesser important reference, so don’t worry about it. | Summary: The paper presents BICCOS: a method to derive cutting planes for use within a state-of-the-art neural network verification framework based on branch and bound (BaB). Given verified (UNSAT) subproblems, BICCOS tries to find a subset of the employed branching choices that led to the verification result, and applies a cutting plane that prunes this subtree from the rest of the BaB procedure. Engineering improvements (going over multiple BaB tree and branching choices in parallel as a pre-processing step) are also presented. The experimental results suggest moderate improvements upon the state-of-the-art over the considered benchmarks.
Strengths: The idea behind BICCOS is fairly simple, yet relatively novel in the context of neural network verification. Given the additional overhead linked to the "strengthening" procedure (recomputing bounds after removing branching decisions), which is required for the overall algorithm, one may think that the overall approach may not pay off. The experiments show that it does, although somewhat marginally, I would believe.
Weaknesses: **Presentation.**
The paper feels quite rushed, and the quality of the presentation definitely needs to improve to meet the NeurIPS bar.
The figures are fairly small (especially Figure 2) and fairly hard to read on paper. I would suggest that the authors remove the shadows too, which make things harder. The Tables are also fairly hard to read on paper. The text still has some typos (e.g., "BICwhere" in line 139). The example from Figure 1 does not correspond to the text in lines 160-166 (x_3 >= 0 vs x <= 0) or to Figure 2a. In lines 308-310 the text suggests that the comprehensive BICCOS configuration performs the best in all cases: this is not what appears from Table 3.
**Feasibility.**
This is linked to the presentation but it's important enough to stand as a separate point. Page 4 repeatedly speaks of infeasibility in a context where I think it's technically incorrect. I think that the fact that a subproblem lower bound is positive does not imply infeasibility: it could very well be that both its lower and upper bounds are positive (with UB > LB), simply suggesting that the minimum for that subproblem (but of course not necessarily for the original problem) is positive. This means that the counter-example search is infeasible, but not the variable assignment (the series of branching decisions). In order to prove infeasibility of the subproblem, one would need to either show that the local UB is smaller than the subproblem LB, or show that the subproblem LB would go to infinity in the limit for iterations (that is, the underlying dual problem is unbounded). All the arguments being made still apply even for feasible yet verified subproblems: the goal is simply to exclude BaB subregions which we know already will lead to positive lower bounds (hence pruning the tree). But this terminology should be adapted to avoid any confusions.
**Results.**
While the fact that the proposed approach works is interesting (see *strengths* above), I do not think the presented experimental results are particularly impressive. Most of the improvements over GCP-CROWN (or Beta-CROWN, when GCP-CROWN can't be applied because of scalability issues) are fairly small. Furthermore, more granularity in the results would be needed (see questions). While this should not be a problem for acceptance on its own, I think it is when combined with the presentation issues above.
--------
**Post-discussion.** I am increasing my score to 6 following the discussion with the authors. I encourage them to acknowledge the shortcomings of the proposed approach in the next version, and to improve the presentation as discussed.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1) It is repeatedly claimed that the procedure is specific to NN verification, but I think such an approach would apply more generally to any BaB procedure (finding subsets of branching decisions that led to a positive verification result and using that as cutting plane), or at least on any BaB procedure for MILPs. Could you please elaborate on this?
2) As commonly done in previous work (for instance, Beta-CROWN), plots showing the number of verified properties within a given runtime are needed to fully assess the trade-offs associated to the proposed approach. For instance, how much does it slow verification down on easier properties?
3) It seems to me that Table 2 reports the best configuration across those in Table 3, for each BICCOS row. Could the authors clarify this?
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Limitations are appropriately addressed in the conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments and valuable questions. We want to clarify a few key misunderstandings about feasibility and results. We hope the reviewer can reevaluate our paper based on our response below:
- For Weakness
* Presentation.
For presentation, we will fix typos and adjust the formatting according to the reviewer's feedback, and revise the text to accurately reflect the results shown in Table 3, clarifying that the comprehensive BICCOS configuration does not perform best in all cases.
* Explanation of feasibility.
We realize the potential for confusion. Our formulation is in line with your last statement: we are only interested in the regions that may potentially contain adversarial examples, i.e., there exist inputs $x$ such that $f(x) \leq 0$. Therefore, if the lower bound becomes positive, the existence of adversarial examples in this subdomain can be excluded (infeasible).
Generally, in neural network verification, the safety property (non-existence of adversarial examples) is negated and posed as a satisfiability problem. Tools then report SAT if adversarial examples exist and UNSAT if the safety property holds. This implies that the input $x$ is restricted not only to the given input region, but also to those $x$ where $f(x) \leq 0$. If no adversarial example exists, no $x$ with $f(x) \leq 0$ exists, so the assignment becomes infeasible. Often, this constraint is only used indirectly: first, a lower bound of $f(x)$ is computed. Then, if it is positive, the underlying assignment is in fact infeasible, as the constraint $f(x) \leq 0$ would always be violated. There is also work [1,2] on directly incorporating this constraint into the optimization process.
While the implicit conversion from "lower bound > 0" to "infeasible" is common in the neural network verification community, we recognize the need to make this step explicit. We will rewrite the respective sentences accordingly to avoid confusion, but we want to emphasize that our existing theoretical results are sound and not affected by these changes.
[1] Kotha, S., Brix, C., Kolter, J. Z., Dvijotham, K., & Zhang, H. (2023). Provably bounding neural network preimages. Neurips 2023.
[2] Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, Jun Sun, Bai Xue, and Lijun Zhang. Improving neural network verification through spurious region guided refinement. Tools and Algorithms for the Construction and Analysis of Systems, 2021b.
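As a toy illustration of this "lower bound > 0 implies no counter-example" argument, here is a minimal interval bound propagation sketch. This is our own simplified example, not the bound propagation actually used in the tools discussed:

```python
def interval_affine(lb, ub, weights, bias):
    # Propagate per-coordinate interval bounds through y = Wx + b.
    out_lb, out_ub = [], []
    for row, b in zip(weights, bias):
        lo = b + sum(w * (l if w >= 0 else u) for w, l, u in zip(row, lb, ub))
        hi = b + sum(w * (u if w >= 0 else l) for w, l, u in zip(row, lb, ub))
        out_lb.append(lo)
        out_ub.append(hi)
    return out_lb, out_ub

# Toy property: f(x) = ReLU(x) + ReLU(1 - x) + 0.5 over the input box x in [0, 1].
lb1, ub1 = interval_affine([0.0], [1.0], [[1.0], [-1.0]], [0.0, 1.0])
lb1 = [max(0.0, v) for v in lb1]  # ReLU applied to interval bounds
ub1 = [max(0.0, v) for v in ub1]
f_lb, f_ub = interval_affine(lb1, ub1, [[1.0, 1.0]], [0.5])

# A positive lower bound means no x in the box can satisfy f(x) <= 0,
# so the counter-example search in this subdomain is infeasible (UNSAT).
verified = f_lb[0] > 0
```

Here the computed lower bound is 0.5 > 0, so the subdomain can be pruned even though the assignment of input values itself is perfectly feasible, which is exactly the terminological distinction the reviewer raised.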
- Q1. Can BICCOS be applied to any BaB procedure?
BICCOS is specialized to the branch-and-bound procedure of most SOTA neural network verification tools. While neural network verification constitutes a sub-problem of the more general MILP problem set, the respective tools have been tuned to its specific kind of problems.
Specifically, the ReLU activation functions in neural networks are difficult for regular MILP solvers to handle, as they require extensive branching to cover all possible combinations of assignments. Neural network verification tools have developed specialized techniques to deal with those non-linear activation functions by overapproximating them and - crucially - improving this overapproximation iteratively using GPU-acceleration.
However, by tuning neural network verification tools toward this specific subset of tasks, they have become non-ideal (or impossible) to apply to general verification problems. Therefore, BICCOS cannot be directly applied to generic MIP problems that are unrelated to neural network verification.
- Q2. How much does BICCOS slow down the verification on easier properties?
Thank you for pointing out the need for plots to demonstrate the trade-offs of our approach. In response to your query, **Fig. 2 in the pdf file** illustrates the number of verified properties within various runtime thresholds. This visualization helps to clarify the performance trade-offs associated with our approach. As indicated in the figure, the slowdown experienced with our method is relatively minor for simpler properties, which are often verified before any cuts are applied. For more complex instances, however, the benefits of our approach become more evident, with a notable improvement in verification time. This trend suggests that while our method introduces a slight delay in simpler cases, it significantly enhances performance on more challenging properties, providing a net gain in efficiency across a diverse set of scenarios. We believe this balanced approach is beneficial for practical applications where varying levels of difficulty are encountered.
- Q3 Does Table 2 report the best configuration?
Thank you for your observation. Table 2 indeed reports the best configuration for each BICCOS row as identified among those listed in Table 3. This approach ensures that the reported settings are optimal for each benchmark. For instance, large models perform best with MTS, while small models benefit from MIP cuts, etc., on each dataset. This methodology is consistent with common practices in the field. E.g. in the VNN-COMP, teams often fine-tune their tools for each benchmark set. Crucially, we note that we did not explore a large set of hyperparameters, and using all BICCOS features (MILP cuts, constraint strengthening, multi-tree search) is best for all but 2 benchmarks, where it is outperformed by the tuned version (multi-tree search disabled to avoid its overhead) by only 0.5 percentage points.
We thank you again for the valuable comments, and we hope the weaknesses (especially regarding infeasibility) have been addressed. We hope you can reevaluate our paper based on our response. Thank you.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I thank the authors for their response. I appreciate the willingness to improve the presentation of their work in a future version, and thank the authors for the clarification on their use of feasibility.
Unfortunately, I am still leaning towards rejection, as I still believe the experimental improvements to be somewhat marginal, and I still think that, presentation-wise, the submission feels a bit rushed.
I understand that it is common practice in VNN-COMP to tune and engineer a framework to a given setting, but in my own view the goal of a paper is slightly different. The shortcomings of the presented approach should have featured more prominently, along with a comprehensive explanation as to why something would not pay off in a given setup. Instead, the submission appears to be seeking to sweep the shortcomings of the full configuration under the rug. For instance, line 309 even states "The comprehensive BICCOS configuration, incorporating all optimizations, achieves the highest verified accuracies across models.", which is either incorrect or misleading.
Related to this, I think the paper should have included a more prominent description of the overhead of the framework on easier properties. I appreciate the inclusion of Figure 2 in the response, albeit I think it is incomplete and would have read better in a log scale over time (as it is, the overhead on slow properties is hard to quantify as a share of runtime). An updated Figure 2, also featuring other baselines such as GCP-CROWN, which is expected to perform much better than Beta-CROWN on harder properties, should definitely appear in the next version of the work.
---
Reply to Comment 1.1.1:
Title: Discussions on our presentations and results (part 1/2)
Comment: We are very grateful for your timely response. Following your constructive advice, we would like to clarify a bit more about the presentation of our paper and the significance of our results.
> the experimental improvements to be somewhat marginal
We want to point out that the room for improvement for many benchmarks is not big—the **verification lower bound is quite comparable to the PGD upper bound**, so a massive improvement cannot be shown if we directly read these numbers. For example, in Table 2, we have MNIST CNN-A-Adv (74.0% vs 76.5%), CIFAR CNN-A-Adv (49% vs 50%), CIFAR CNN-A-Adv-4 (48.5% vs 49.5%), and CIFAR CNN-A-Mix-4 (56.5% vs 57.5%). Although these standardized benchmarks have been widely used in the literature, they have only a few percentage points left for improvement.
In fact, if you look at the gap between the lower and upper bound, we did get a quite pronounced improvement. For example, for MNIST CNN-A-Adv, GCP-CROWN has a **4.5%** gap between lower and upper bound, but we have only **2.5%** gap. That is a ~44% improvement on reducing this gap. Also, in Table 1, on oval21, we completely **close the gap** between lower and upper bound; on cifar10-resnet, the number of unsolved instances (gap) is **reduced from 9 to 6** compared to GCP-CROWN; on cifar100-tinyimagenet, the gap is **reduced from 25 to 18**.
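For concreteness, the ~44% figure follows directly from the quoted gaps (percentage points between verified lower bound and PGD upper bound on MNIST CNN-A-Adv):

```python
# Gaps quoted above: GCP-CROWN leaves a 4.5-point gap, BICCOS a 2.5-point gap.
gcp_gap, biccos_gap = 4.5, 2.5
reduction = (gcp_gap - biccos_gap) / gcp_gap
# reduction is roughly 0.44, i.e. ~44% of the remaining gap is closed.
```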
**The verification community has been working hard to close this gap** (see [ref. A] below, Intro section), and in fact, the few instances remaining in each benchmark reported here, are all very challenging ones. For example, in GCP-CROWN paper, which completely solved the oval20 benchmark (their Table 1), only improves the verified instances from 98% to 100% (CIFAR-10 Wide) and 97% to 100% (CIFAR-10 Base). Number-wise, it is just a "marginal" (a few percentage points) improvement similar to the improvements we report, **but it is actually quite a big achievement since no algorithm could solve these remaining hard instances**. Similarly, in [ref. A], which aims to improve the upper bound to close the gap, they also only demonstrated improvements on very few hard instances - **their improvement is less than 0.5%** if evaluated on the entire dataset as we did.
[ref. A] A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks, Zhang et al., ICML 2022
> the submission appears to be seeking to sweep the shortcomings of the full configuration under the rug.
> line 309 even states "The comprehensive BICCOS configuration, incorporating all optimizations, achieves the highest verified accuracies across models."
Following your suggestions, we will rephrase the sentence as "Our results show that the addition of BICCOS cuts is overall beneficial. When a MIP solver is feasible, BICCOS can be combined with the MIP cuts in GCP-CROWN to potentially further improve the verified accuracy of GCP-CROWN, demonstrating the quality and effectiveness of our cuts. When MTS is used, it may further improve the verified accuracy on some benchmarks, but the overhead of MTS may reduce the verified accuracy on some benchmarks."
We will also replace the numbers under the BICCOS column in Table 2 with the numbers with MTS and MIP cuts enabled (if an MIP solver can scale to this setting), even in the case where MTS slows down verification due to overhead. In fact, we have optimized our software further to reduce the overhead of MTS, and the potential overhead has now become smaller. Eventually, we will enable all BICCOS components to be the default in our to-be-released verifier. We will produce a message to users when MTS introduces too much overhead compared to the overall timeout threshold and suggest users turn off this feature. | Summary: The paper extends GCP-CROWN, an existing toolkit for the verification of neural networks which is based on GPU-accelerated bound propagation combined with a branch-and-bound (BaB) approach. The strength of the existing algorithm is its ability to incorporate cutting planes into the bound propagation process. GCP-CROWN uses cutting planes generated by a Mixed Integer Linear Programming (MILP) solver which is run in parallel to the bound propagation, however, MILP solvers generally do not scale to large problems and only generate generic, problem-independent cutting planes.
The authors propose a new approach to generate cutting planes called BICCOS which works by exploiting information from verified branches in the BaB tree. Once a branch is verified, the idea is to remove a number of constraints from the branch in an attempt to obtain a subset of constraints that is sufficient for obtaining a "verified" result. If successful, a cut which is valid for all other branches in the BaB tree can be generated based on these constraints. The generation of cuts is run during the normal verification process, but the authors propose adding a presolve step that initially generates multiple shallow BaB trees in an attempt to create a pool of cuts before starting the standard BaB phase from GCP-CROWN.
The experimental evaluation shows that BICCOS scales well, is able to generate cuts for problems that the MILP solver employed in GCP-CROWN can't scale to, and outperforms many competing tools.
Strengths: - Neural Network Verification is a relevant research topic
- A cut generation method that scales to larger networks as well as cuts that are problem-specific and less generic than those generated by a MILP solver are useful contributions.
- The method outperforms most other toolkits in the experimental evaluation
Weaknesses: - The work is somewhat incremental compared to GCP-CROWN
- When comparing the performance of GCP-CROWN and BICCOS (base) in Table 3, this seems to indicate that the newly introduced cuts are weaker than the MILP cuts. Including presolve (BICCOS(with multi-tree)) improves performance compared to GCP-CROWN in only some instances. It's good to have a cut generation method which scales to larger networks, but the method would be a lot stronger if the BICCOS cuts alone outperformed the generic MILP cuts.
- A comparison with existing cuts, such as the ones from [1], is missing. In the related work section the authors only state that Venus (which implements the cuts from [1]) delivers weaker empirical results than their approach. However, the contribution of this paper are the new cuts and comparing the BICCOS cuts in a GPU-enabled bound propagation framework to the cuts by Botoeva et al. implemented in a MILP-based verifier (which can't make use of GPU acceleration) is not fair since it is well-known in the literature that bound propagation frameworks outperform MILP verifiers. To assess whether the BICCOS cuts are more effective than the cuts in [1] they should be implemented in the same general framework. Without this comparison it is hard to judge the contribution of this work.
- Appendix A2: This part of the appendix is either very unclear or has a lot of typos. There are a lot of expressions like $\sum_{i \in \mathcal{Z}^+} z_i + \sum_{j \in \mathcal{Z}^-} (1 - z_i)$. Do the authors mean to write $\sum_{i \in \mathcal{Z}^+} \left ( z_i + \sum_{j \in \mathcal{Z}^-} (1 - z_i) \right )$? If so, the extra set of brackets should be added. If the authors actually do mean to write $\sum_{i \in \mathcal{Z}^+} \left ( z_i \right ) + \sum_{j \in \mathcal{Z}^-} (1 - z_i)$ then the $z_i$ in the second sum makes no sense, should this be $z_j$ then? This unclarity/mistake appears in the two equations between line 516 and 517 (which aren't labeled so I can't refer to them), in line 518, in the first equation between line 518 and line 519 and the second equation between line 518 and 519 (these also aren't labeled so I can't refer to them directly).
### Minor points
- Line 10-11: cutting planes constraints --> cutting **plane** constraints
- Line 60: proposed --> **propose**
- Line 67: discussed --> **discuss**
- Line 98: The paper states "If $f^* > 0$, it is unclear whether the property might hold" --> Shouldn't this be "If $f^* < 0$" (i.e. the inequality being flipped?)
- Line 98: proerty --> pro**p**erty
- Line 109: Remove "the" (sentence should be "most existing NN verifiers use cheaper methods such as (...)")
- Line 139: Remove **BIC** at the beginning of the line
- Line 143: they --> **the authors**
- Figure 2a): these subproblems already includes the constraint --> these subproblems already **include** the constraint
- Line 194: along --> **alone**
- Line 198-199: using as fewer variables as possible --> using as **few** variables as possible
- Line 202: we performs a re-verification step, where it recomputes the lower bound --> we **perform** a re-verification step **which** recomputes the lower bound
- Line 208: we propose --> **W**e propose
- Table 3: For MNIST, CNN-A-Adv the "Ver%" for BICCOS (with MIP cuts) is $0.71$. Is this a typo, what is the correct number?
- Appendix A.2: The authors derive two equations by "taking a negation of this equation". I find this part a bit unclear, do they mean that if the equation holds then this is equivalent to one of the two new equations holding? Or do both of the two new equations need to hold?
- Line 547-549 in Appendix A4: The authors write "on the CIFAR CNN-B-Adv model, BICCOS with multi-tree search explores $2.54 \times 10^3$ branches, significantly lower than the $1.57 \times 10^3$ branches explored by the base BICCOS version." However, as far as my understanding goes, $2.54 \times 10^3$ is not a **smaller** but a **larger** number than $1.57 \times 10^3$ so the sentence here makes no sense. I tried to double-check this but the same numbers are reported in the table below (I assume that "BICCOS with multi-tree search" in the text is the same as "BICCOS (with Presolve)" in the table). Could the authors clarify what their point is here?
### References
[1] Botoeva, E., Kouvaros, P., Kronqvist, J., Lomuscio, A. & Misener, R. (2020) Efficient Verification of ReLU-Based Neural Networks via Dependency Analysis. In: Proceedings of the AAAI Conference on Artificial Intelligence. 3 April 2020 pp. 3291–3299. doi:10.1609/aaai.v34i04.5729.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Table 3: Why does BICCOS with multi-tree perform worse than BICCOS (base) for CNN-A-Adv-4, do the authors have an explanation for this?
- Why is BICCOS run with a shorter timeout compared to other algorithms? Also why is MN-BaB run with a 600s timeout and then $\beta$-CROWN, GCP-CROWN and BICCOS are run with a 200s timeout in Table 2, but then in Table 3 the footnote seems to suggest that BICCOS is run with a 200s timeout while $\beta$-CROWN and GCP-CROWN use a longer timeout? This seems inconsistent. The experiments would be more informative if all algorithms were run with the same time budget as is usual practice e.g. in VNNComp.
- In Table 1/2 does BICCOS use cuts from a MILP solver (if the solver scales to the problem) or only the newly introduced cuts?
- Table 3: For CNN-A-Mix-4 the BICCOS-MIP approach has a verified accuracy of 56.5% but BICCOS-all has 56%. What is the authors' intuition here regarding why adding the multi-tree approach worsens performance, do they think this is an issue/can be avoided?
- Line 302-310: Could the authors clarify what each variant of the algorithm includes here? The text makes it sound a bit like the authors start from BICCOS (base) and then gradually add other components, but does BICCOS (with multi-tree) also include MIP cuts? If so, the performance drop from 52 to 51.5% on e.g. CIFAR CNN-B-Adv would be surprising, any explanations?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations are sufficiently addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments and valuable questions. We hope the reviewer can reevaluate our paper based on our response below:
* For Weakness.
- W1. The work is somewhat incremental compared to GCP-CROWN
GCP-CROWN and our work make orthogonal contributions. GCP-CROWN does not provide an efficient way to find cuts as we do. GCP-CROWN primarily focuses on solving cuts provided to it without engaging in the cut-finding process. We just use GCP-CROWN as a solver to do bound propagation with constraints. On the other hand, our work introduces an efficient algorithm specifically designed to find these cuts. This distinction highlights a significant contribution of our method: the ability to identify and generate cuts, not just solve them. By developing an algorithm that efficiently finds cuts, we add a valuable tool to the existing framework, enhancing the overall effectiveness of neural network verification.
- W2. Performance Comparison of MILP cuts and BICCOS.
We agree that MILP solver cuts are inherently powerful, building on decades of prior work in integer programming [1]. However, a critical limitation of MILP solver cuts is their scalability. As the network size increases, i.e., above 4M parameters, the complexity and computational resources required for MILP solvers to generate cuts, or even to initialize the model, become prohibitively high. In our experiments, we found MILP could not scale to large networks such as CIFAR-100 (5.4-31.6M parameters) and Tiny-ImageNet (14.4M parameters). This limits their practical applicability to larger neural networks. Our BICCOS method, while producing slightly weaker cuts than MILP, offers significant advantages in terms of scalability.
[1] Wolsey, L. A., & Nemhauser, G. L. (2014). Integer and combinatorial optimization. John Wiley & Sons.
- W3. Comparison with Venus2
**Fig. 1 in the PDF file** shows a comparison with Venus2. We emphasize making a fair comparison of the strengths of the cuts. Since Venus2 uses an MILP solver to process its cuts, in these experiments we do not use the efficient GCP-CROWN solver. Instead, we also use an MILP solver to handle the BICCOS cuts we found. This ensures that the speedup we achieve does not come from the GPU-accelerated GCP-CROWN solver. Since our cut generation relies on the BaB process, we first run BICCOS to get the cuts, and then import the cuts into the MILP solver.
We note that Venus uses branching over ReLU activations to create multiple strengthened MILP problems. On the other hand, we only create one single MILP and do not perform additional branching. Therefore, our MILP formulation is weaker. The fact that we can outperform Venus2 anyway underlines the strength of the generated cuts by BICCOS.
- W4. Appendix A2 function typo.
We apologize for the confusion caused by the typo. We intended to write $$ \sum_{i \in \mathcal{Z}^{+}} z_i + \sum_{i \in \mathcal{Z}^{-}} (1 - z_i) \leq |\mathcal{Z}^+| + |\mathcal{Z}^-| - 1 $$
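To make the intended inequality concrete, here is a small hedged sketch (not from the paper): it checks that this conflict cut excludes exactly the one joint assignment fixing every neuron in $\mathcal{Z}^+$ active and every neuron in $\mathcal{Z}^-$ inactive, while keeping all other binary assignments feasible.

```python
# Hedged illustration of the conflict cut
#   sum_{i in Z+} z_i + sum_{j in Z-} (1 - z_j) <= |Z+| + |Z-| - 1 .
# The sets z_plus / z_minus below are a hypothetical conflicting assignment.
from itertools import product

def cut_holds(z, z_plus, z_minus):
    lhs = sum(z[i] for i in z_plus) + sum(1 - z[j] for j in z_minus)
    return lhs <= len(z_plus) + len(z_minus) - 1

z_plus, z_minus = [0, 1], [2]  # neurons 0,1 fixed active; neuron 2 inactive

violations = [z for z in product([0, 1], repeat=3)
              if not cut_holds(z, z_plus, z_minus)]
# Exactly one binary assignment is cut off: z = (1, 1, 0).
assert violations == [(1, 1, 0)]
```

The left-hand side reaches its maximum only on the conflicting assignment itself, which is why a single cut removes exactly that infeasible subproblem.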
* For minor issues
- Typos. Thank you for the detailed review, we will fix the typos. The correct number is 71, not 0.71.
- Explanation in Appendix A.2. When we refer to "taking a negation of this equation," we mean that if the original equation holds, it leads us to the two new conditions. However, given the context and the bounds on $z$, only the first equation needs to hold.
- Explanation in Appendix A4. The sentence was meant to highlight the efficiency of the multi-tree search despite exploring a larger number of branches because **the multi-tree search domains are also counted in the total number of branches**. The multi-tree search method inherently examines multiple trees, thereby increasing the number of branches explored. These domains represent different subproblems that are explored simultaneously. Although this increases the branch count, the exploration within each domain is optimized, leading to faster convergence and overall reduced computation time. We will correct the text to reflect this explanation accurately. Thank you for your careful review and for helping us improve the clarity of our paper.
* For questions
- Q1. Why does BICCOS with multi-tree perform worse than BICCOS (base) for CNN-A-Adv-4?
For the CNN-A-Adv-4 benchmark, BICCOS with multi-tree search performs worse than BICCOS (base) primarily due to the additional time cost associated with the multi-tree search approach. This benchmark contains 200 instances, and each instance has 10 output classes, requiring us to verify 9 properties; in the worst case we have to perform a multi-tree search for each property.
- Q2. Timeout difference in comparisons
We apologize for the confusion. The MN-BaB results were copied from the VNN-COMP 2022 report. We did not reproduce those ourselves, though we use the same hardware for our experiments as they did. In Table 3, BICCOS, $\beta$-CROWN and GCP-CROWN all have a timeout of 200s. We will update the table caption accordingly. An increased timeout for MN-BaB may increase its percentage of verified instances; however, we can still outperform it.
- Q3. Question in Table 1 \& 2.
In Tables 1 and 2, BICCOS uses cuts from BICCOS (base) + multi-tree search + MIP cuts from CPLEX.
- Q4. Question in Table 3.
In Table 3, the performance degradation from the multi-tree search is caused by its associated computational overhead.
- Q5. Clarification what each variant of the BICCOS uses
- BICCOS (base): only contains the cut inference during regular BaB,
- BICCOS (with multi-tree): includes BICCOS (base) but not MIP cuts,
- BICCOS (all): includes BICCOS (base), multi-tree search, and MIP cuts.
---
Rebuttal 2:
Comment: Thank you very much to the authors for the clarification of the points that I raised and for answering my questions. I appreciate the thorough response regarding concerns W1/W2 and the explanations regarding what the main contributions are from your side.
The additional comparisons as a response to W3 are very useful, thank you for this. I think it would be helpful if this was included in the appendix of the paper.
---
Rebuttal Comment 2.1:
Title: We thank the reviewers again and please let us know if you have any further questions before the discussion is closed
Comment: Thank you for your constructive feedback and for acknowledging our responses. We're glad that the additional comparisons addressing W3 were helpful, and we agree that including them in the appendix would be beneficial. We will make sure to incorporate this in the final version of the paper. We hope these updates might lead you to reconsider your score. Your insights are much appreciated and please let us know if you have any further questions before the discussion is closed
Best Regards,
Anonymous Authors | Summary: This work proposes a new approach to produce cutting planes in the context of branch-and-bound-based solvers for neural network verification. Whenever an infeasible subproblem is encountered in branch-and-bound, this method generates a cut from the conflicting assignment that led to the infeasible subproblem (initially redundant w.r.t. the remainder of the tree), and attempts to strengthen this cut by heuristically dropping some of the assignments and rechecking for infeasibility via the lower bound. This is further enhanced by using several parallel shallow trees to produce stronger cuts. This method is implemented on top of the $\alpha,\beta$-CROWN framework and provides meaningful computational improvements compared to various baselines on a set of benchmarks.
Strengths: This paper provides a solid contribution to the area of neural network verification methods. It builds on top of cut-based branch-and-bound verification solvers by presenting a method to quickly infer cuts, which appear to be novel and computationally useful. In particular, they nicely leverage the fact that there are fast methods to produce bounds in NN verification, allowing us to quickly recheck cut validity. This makes for a clean and simple method, which has the advantage of not being too complicated to integrate with an existing cut-based BaB verification solver.
Both the set of benchmarks and baselines are reasonably extensive, and we can observe improvements in verifiability that are sufficient for a meaningful computational contribution, especially in the CIFAR instances in both VNN-COMP and SDP-FO, without much additional cost in computational time. The paper is overall clearly written and the figures and algorithms are helpful.
Weaknesses: In some of the benchmarks, the computational results may be somewhat incremental, but overall they are positive. There is potential room for improvement in parts of the methods (see Questions section below); in particular, it is not clear if the authors have explored variations of their constraint strengthening approach. In general, I do not see major weaknesses in this paper, though minor concerns are expanded on below.
Technical Quality: 4
Clarity: 4
Questions for Authors: General comments:
1. I see that your variable-to-drop selection heuristic is based on their improvements to the lower bound in the tree. While this seems reasonable as a fast heuristic since you already have all the data, these improvements are not independent from each other since they are constrained over previous assignments, and thus there is some bias depending on the depth (i.e. if the assignments were done in a different order in the tree, you'd select different variables, but the constraint is the same). This makes me wonder if there is a better heuristic. Have you considered other approaches?
2. It seems that you try to continue strengthening the cuts based on a fixed drop percentage. I am curious if you have tried something more like a binary search approach over the verification bound? You can also add some sort of tolerance on the bound to stop searching when you are close enough to zero.
3. In Algorithm 1, lines 12-13, you add the cut, and then try to strengthen it again. It sounds like you could just add the best cut here, instead of adding all cuts throughout strengthening, since the best cut dominates the other ones.
4. I appreciate the explanation of the differences between this method and [7] and DPLL/CDCL in the Related Work section, as both of these were in my mind as I was reading the paper. However, I'd like to comment that the reasoning for CDCL makes it sounds like it is impossible in practice to use learned clauses from CDCL as cuts; rather, I believe this is more of a challenging engineering task than not being practically viable. Much like branch-and-bound has been customized for NN verification in a more effective way than generic MIP solvers, I do not see a reason why one would not be able to customize ideas from CDCL for your cut generation procedure. The learned conflicts can be naturally translated to the type of cuts you have in Sec. 3.1 of this paper (except already stronger) and then further strengthened in the same way. Given that CDCL is a tried-and-true method to produce good conflicts, I suspect that this might lead to better cuts and may be interesting future work that would already fit very well with what you have so far.
5. I would have liked to see some data to better understand the cuts generated. In particular, what is the fraction of infeasible nodes from which you were able to produce a (non-redundant) cut, and what was the total number of cuts? What was the average number of assignments that you were able to drop? This sort of data would reveal more information on the overhead of these cuts and how easy or hard it is to find them. I wonder if some cut selection procedure would make sense here, but I do not know if you have many or few cuts.
Comments on text:
6. In the MIP section, it would be useful to mention that MIP is also based on branch-and-bound, and modify the last sentence to include why a custom branch-and-bound is more effective than MIP branch-and-bound in practice (e.g. because MIP is based on solving LPs which can be expensive, etc.).
7. In Sec 3.1., I suggest preparing the reader to the fact that the cuts from Sec. 3.1 are redundant w.r.t. the tree until strengthened, instead of waiting until Sec. 3.2 to mention that. This is a question one would naturally have while reading Sec. 3.1, and it would make the reading easier if they already know the answer to that question.
8. Figure 2 is a bit too small to read especially when printed out. If possible, please make it larger.
9. Can you include in the text that your bound computation when strengthening a cut includes the global set of cuts? I see it in Algorithm 1, but I didn't see it in the text.
10. Could you include exactly how the trees differ from each other in the multi-tree approach? The text mentions that they explore a different set of branching decisions, but not how.
11. Could you expand on which methods/baselines use GPUs, and how they are used? This is important for a proper comparison.
12. Typos: Remove "BIC" in line 139, capitalize "we" in line 208, "instances" and double period in caption of Table 4.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The Limitations section is reasonable, covering cases where this approach does not work well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for your constructive feedback and for correctly recognizing our key contributions. We appreciate your support and very helpful feedback. We provided additional experiments as requested and clarified the key questions below:
* Q1: Do You Consider Other Drop Heuristics?
We acknowledge that while our current approach, based on improvements to the lower bound in the tree, is efficient given the available data, it does introduce dependency biases due to the constrained nature of previous assignments. There might exist better heuristics that can be explored in future work, but our work is the first of this kind and we want to start with a simple and effective heuristic.
We have considered and tested a random drop heuristic and the KFSB score heuristic as alternatives. Our benchmark results across SDP and oval22 indicated no significant improvement in performance compared to $\beta$-CROWN; please refer to the following table. These results suggest that these two heuristics do not provide a robust alternative in this context.
| Dataset | Beta CROWN | Random Drop | KFSB Score | Influence Score |
|------------------|------------|-------------|------------|-----------------|
| cifar_cnn_a_adv | 44.50% | 44.50% | 44.50% | 47% |
| cifar_cnn_a_mix | 41.50% | 41.50% | 41.50% | 45.5% |
| cifar_cnn_b_adv | 46.50% | 46.50% | 46.50% | 49% |
* Q2: Exploring Binary Search Approaches for Cut Strengthening
It’s correct that our current method employs a fixed drop percentage. We did not investigate a binary-search-based approach, as this would increase the number of verification queries. Instead, we recursively tighten (line 205) the cut further should the first query succeed. If it fails, re-introducing constraints would reduce the benefit of BICCOS, while increasing the associated overhead. However, we do acknowledge that this could be tuned further in future research. We did explore dropping only 30% of the constraints at a time, with no immediate benefit. This demonstrates that we are not sensitive to this hyperparameter.
* Q3: Optimizing Cut Addition in Algorithm 1 by Selecting Only the Best Cut
We agree the strengthened cuts make the previous cuts obsolete. We will replace lines 12 and 13 of the algorithm with
```
recursively_strengthened_cuts = constraint_strengthening(f, C_{new}, C_{cut}, drop_percentage)
if strengthening_was_successful:
    C_{cut} <- C_{cut} \cup recursively_strengthened_cuts
else:
    C_{cut} <- C_{cut} \cup strengthened_cuts
```
Intuitively, if the next rounds of constraint strengthening produce a better cut, we use these better cuts rather than the currently inferred cuts.
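A hedged, runnable sketch of this keep-only-the-dominating-cut idea (the function names below are illustrative stand-ins, not the actual Algorithm 1 API; the infeasibility check is a dummy predicate):

```python
# Hedged sketch: instead of accumulating every intermediate cut produced
# during recursive strengthening, keep strengthening while the bound check
# still certifies infeasibility, and add only the final (shortest) cut.

def try_strengthen(cut, still_infeasible):
    """Drop the last literal if the (dummy) infeasibility check still passes."""
    shorter = cut[:-1]
    return shorter if shorter and still_infeasible(shorter) else None

def add_best_cut(cut_pool, new_cut, still_infeasible):
    best = new_cut
    while True:
        stronger = try_strengthen(best, still_infeasible)
        if stronger is None:
            break
        best = stronger            # the shorter cut dominates the longer one
    cut_pool.append(best)          # add only the dominating cut
    return cut_pool

# Dummy check: pretend any cut with at least 2 literals stays infeasible.
pool = add_best_cut([], ["z1", "z2", "z3"], lambda c: len(c) >= 2)
assert pool == [["z1", "z2"]]
```

Because each successful strengthening step yields a cut that implies the previous one, keeping only the last cut leaves the feasible region unchanged while shrinking the constraint set passed to the solver.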
* Q4: Integrating CDCL Learned Clauses as Cuts for Enhanced Cut Generation
We agree that CDCL can be used to generate cuts even though this will be a challenging engineering task. This is an interesting future work and we will rephrase our paper accordingly. Our current approach focuses on generating cuts and then strengthening them, which has proven to be both easy to implement and effective. While CDCL could enhance the initial cut generation, it does not inherently provide a mechanism to strengthen these cuts using the solver, a novel step in our paper that is crucial for performance.
* Q5: Data Analysis on Cut Generation Efficiency and Feasibility in Neural Network Verification
We evaluated the UNSAT nodes and calculated the percentage that resulted in the generation of cuts. The table below provides a detailed breakdown:
| Dataset | Avg. # Cuts Generated | Avg. UNSAT Nodes | Fraction of Cuts/UNSAT Nodes |
|-------------------|---------------------|------------------|------------------------------|
| cifar_cnn_a_adv | 345.78 | 763.70 | 0.4528 |
| cifar_cnn_a_adv4 | 86.33 | 2165.11 | 0.0399 |
| cifar_cnn_a_mix | 121.56 | 1766.77 | 0.0688 |
| cifar_cnn_a_mix4 | 25.93 | 807.19 | 0.0321 |
| cifar_cnn_b_adv | 105.79 | 808.00 | 0.1309 |
| cifar_cnn_b_adv4 | 106.14 | 1800.21 | 0.0590 |
| mnist_cnn_a_adv | 11.24 | 1351.67 | 0.0083 |
Regarding the number of dropped assignments, this is related to the number of rounds of BaB (Branch and Bound) using our heuristic. Our setting has a drop ratio of 0.5 if the Lagrange factor of the neuron is 0, so the average number of dropped assignments will be less than 50%. This number decreases as BaB goes deeper and the domain becomes more refined.
Given this data, we agree with your suggestion that a cut selection procedure could be beneficial. We are implementing the new selection algorithm mentioned in **Q3** above and will report back.
* Minor Issues
- Global set of cuts in strengthening: Thank you for noting this omission. We'll add explicit mention in the text that our bound computation during cut strengthening includes the global set of cuts, aligning with Algorithm 1.
- Introducing cut redundancy earlier: We agree this would improve readability. We'll revise Section 3.1 to briefly mention that the initial cuts are redundant until strengthened, providing context for Section 3.2.
- Multi-tree approach differences: We'll expand on how the trees differ in the multi-tree approach. Specifically, we'll clarify that in the first round, we select different neurons to start each tree, leading to diverse branching decisions. This will lead to different exploration paths.
- GPU usage in methods/baselines:
- CPU: nnenum, Marabou, Venus2.
- GPU: ERAN, OVAL, VeriNet, MN-BaB, PRIMA.
- Typos: we will fix typos and adjust the format according to the reviewer's feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have read the reviews and rebuttals and will keep my "Accept" rating. I do agree with other reviewers that this work is more incremental in nature, but I believe that the contribution is still sufficiently significant for acceptance. In my opinion, the ideas in this paper are interesting enough to publish and the presentation issues appropriately raised by other reviewers can be fixed for the final version. While other reviewers bring up topics that I had not considered, I believe the responses are satisfactory.
A couple of minor comments:
* *On binary search:* You can always limit the number of verification queries in your binary search. While I believe it is ok to leave it for future research, I suspect that it would work better than the one proposed in the paper. If you do move forward with future work in this direction (e.g. CDCL + your strengthening), I suggest considering this approach.
* *Relationship to existing methods (based on other reviews):* There is some overlap with known methods from the SAT/MILP literature, but the methodological novelty here is the strengthening in the context of NN verification. During my review I did a quick search over the MILP literature to see if this strengthening approach already existed, because it is actually a rather simple method, and while there are similar approaches, I was surprised to see that it does not exist exactly in the way that is done here. From the SAT/CP literature, DPLL/CDCL/no-good learning is probably the closest one, but it is not quite the same.
A key here is that in NN verification we have these very fast, GPU-accelerated lower bounds (whereas MILP requires solving LPs). This opens the door to approaches like these which leverage these lower bounds, and it seems effective for verification, even if incrementally. This is also a reason why a custom B&B makes sense for NN verification. More speculatively, another reason why I think this works well is that in verification we focus on proving infeasibility, and in a way instances are expected to be tightly constrained. In particular, extracting cuts from the B&B tree is probably a good idea because ReLU LP relaxations in NN verification are very loose in deeper layers. While this can be naturally translated into other problems in MILP, I am skeptical it would be as effective in typical problems from the Operations Research community. My view of this paper is that it is an early step into incorporating CDCL-like ideas into the custom B&B framework for verification, much like ideas from MILP were incorporated in the past in the form of B&B and cuts, and I believe this is a positive step forward.
Given that all improvements in the rebuttals are made (including the extra analysis that you made for this review, and especially better contextualization w.r.t. MILP, SAT, and other previous work as requested by both myself and iWFh), I support this paper for acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you for the review
Comment: Thank you for your thoughtful comments and support for the paper.
We appreciate your insights regarding the binary search method and its potential application in future work. Your feedback on the relationship to existing approaches, especially in the context of NN verification, is constructive. We will ensure that the final version addresses the presentation issues and provides better contextualization concerning MILP, SAT, and other related work.
We're glad to have your support for acceptance and look forward to refining the paper accordingly.
Best Regards,
Anonymous Authors | Rebuttal 1:
Rebuttal: Submission of figures of added experimental results
Pdf: /pdf/1b7b808d5f9be625d16be50ca93707eaa7e402a2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Power of Extrapolation in Federated Learning | Accept (poster) | Summary: The paper presents a new method, FedExProx, a federated learning method based on proximal splitting with extrapolation. The method combines the proximal splitting approach of FedProx with the extrapolation of FedExP. FedExProx is shown to improve upon both previous methods in multiple ways, among others faster convergence rates. Then, two adaptive methods are introduced that allow choosing the extrapolation parameter without prior knowledge of the smoothness constants of the individual Moreau envelopes and $L_{\max}$, for both the full and partial participation scenarios.
Strengths: The paper introduces a new method that improves on both FedProx and FedExP in terms of iteration complexity (assuming that the functions are proxable). Further, it extends FedExP to the partial participation case, and the adaptive variants do not require a local step size, as opposed to FedExP.
Weaknesses: **Comparison to FedExP:** FedExProx assumes that the prox problems can be solved exactly, but the analysis of FedExP takes into account the number of local iterations needed. Hence I find the comparison between the convergence rates to be unfair. For smooth problems the prox problems can be approximated very cheaply, and the analysis of prox problems can typically be extended to allow for some error in the prox solution. For general convex and L-smooth functions, if one chooses $1/L$ as the prox parameter, then the prox problem becomes strongly convex and smooth with condition number 2, hence computing an epsilon approximation takes only a logarithmic number of steps. In your example you argue that there exists a closed-form solution for the prox. But for that same example there also exists a closed-form solution for the original problem, so I am not convinced by that argument. I think that the paper should emphasize the assumption that the prox of the function is cheaply computable.
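The condition-number-2 claim in this paragraph follows from the standard definition of the proximal subproblem (a one-line hedged reconstruction, not taken from the paper):

```latex
g(z) = f(z) + \frac{1}{2\gamma}\,\|z - x\|^2
\quad\Longrightarrow\quad
\mu_g \ge \frac{1}{\gamma} = L, \qquad
L_g \le L + \frac{1}{\gamma} = 2L, \qquad
\kappa(g) = \frac{L_g}{\mu_g} \le 2 ,
```

where $f$ is convex and $L$-smooth, $\gamma = 1/L$, the quadratic term alone supplies the strong-convexity modulus $1/\gamma$, and smoothness constants add.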
**Extrapolation:** Assuming that the prox can be solved exactly, then based on Eq. 8, I don't see the advantage of considering $\gamma$ and $\alpha_k$ separately. Couldn't one just have one step size and compute it using one of the proposed adaptive methods directly? What is gained from decoupling the prox factor from the extrapolation parameter? In FedExP this seems useful because one can decouple the local from the global step size, but since you assume that the prox can be computed in closed form, there is no need to adapt the prox parameter, right? The only reason I see for this being useful is if one wants to choose $\gamma$ in order to be able to solve the proxes efficiently.
I am willing to raise my score if the authors clarify these points.
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback on our paper. Here is a detailed response to the weaknesses and questions the reviewer mentioned.
- Weakness 1: `FedExProx assumes that the prox problems can be solved exactly but the analysis of FedExP takes into account the number of local iterations needed. Hence I find the comparison between the convergence rates to be unfair. For smooth problems the prox problems can be approximated very cheaply and the analysis of prox problems can typically be extended to allow for some error in the prox solution. For general convex and L-smooth functions, if one chooses $1/L$ as the prox parameter then the prox problem becomes strongly convex and smooth with condition number 2, hence computing an epsilon approximation takes only a logarithmic number of steps. In your example you argue that there exists a closed form solution for the prox. But for that same example there also exists a closed-form solution for the original problem, so I am not convinced by that argument. I think that the paper should emphasize the assumption that the prox of the function is cheaply computable.`
We partly agree with the reviewer that the paper should emphasize that the proximity operator is computable. This will appear in the next version of the paper. In fact, almost every proximal algorithm needs to assume that the proximity operator is computable in some sense.
In this paper, we assume that the proximity operator is solved exactly for simplicity, as our goal is to demonstrate the effectiveness of extrapolation combined with proximal algorithms in the federated learning setting. Considering inexactness and obtaining a convergence guarantee to a neighborhood is certainly possible but beyond the scope of this paper.
For FedExP, the number of local training rounds plays a role in determining the local stepsize, and if we take the largest local stepsize in the interpolation regime, then the convergence is independent of the number of local training rounds $\tau$. For FedExProx, however, the amount of local computation needed is hidden in $\gamma$, and often the larger $\gamma$ is, the harder it is to compute the proximity operator. However, this does not prevent us from comparing their iteration complexity, as the idea of local training aims to reduce the total number of training rounds and, consequently, the communication complexity. It is neither feasible to directly compare the total number of computations for the two algorithms nor meaningful from our perspective.
---
- Weakness 2: `Assuming that the prox can be solved exactly, then based on Eq. 8, I don't see the advantage of considering $\gamma$ and $\alpha_k$ separately? Couldn't one just have one step size and compute that using one of the proposed adaptive methods directly? What is gained from decoupling the prox factor from the extrapolation parameter? In FedExP it seems that this is useful because one can decouple the local from the global step size, but since you assume that the prox can be computed in closed form, there is no need to adapt the prox parameter $\gamma$, right? The only reason I see for this being useful is if one wants to choose $\gamma$ in order to be able to solve the proxes efficiently.`
We did not manually separate $\alpha_k$ and $\gamma$ to consider them individually; this separation is inherent to the algorithm. This can be seen from the original formulation in Eq (7) of Algorithm 1. The parameter $\gamma$ is the local step size associated with each client and determines the effort needed to solve the proximity operator. Often, the larger $\gamma$ is, the more challenging the local problem becomes. Note that we do not assume the proximity operator can be solved in closed form. The parameter $\alpha_k$ is used for extrapolation. It just so happens that after the reformulation in Eq (8), their product becomes the step size for a gradient-based algorithm to minimize the average of Moreau envelopes. Note also that different $\gamma$ values correspond to different Moreau envelopes $M^{\gamma}_{f_i}$ as local objectives. As a result, $\gamma$ influences the local problems we try to solve and cannot be considered in combination with $\alpha_k$ directly.
It is fair to ask which $\gamma$ is the optimal local step size since we can find the optimal constant extrapolation parameter $\alpha_k$ for each $\gamma$. However, this requires more information about the smoothness of the average of the Moreau envelope $L_{\gamma}$, which is usually unavailable.
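To make the two roles concrete, here is a minimal sketch of one FedExProx-style round. This is our own illustration, not the paper's implementation: we assume quadratic local objectives so the proximity operator has a closed form, and the function names (`prox`, `fedexprox_round`) are ours.

```python
import numpy as np

def prox(gamma, A, b, x):
    """Proximity operator of f_i(z) = 0.5*||A z - b||^2 at x, i.e.
    argmin_z f_i(z) + ||z - x||^2 / (2*gamma); closed form for quadratics."""
    d = x.shape[0]
    return np.linalg.solve(gamma * A.T @ A + np.eye(d), gamma * A.T @ b + x)

def fedexprox_round(x, clients, gamma, alpha):
    """One round: each client solves a prox subproblem (the local step size
    gamma controls how hard this is), then the server averages the results
    and extrapolates with the separate parameter alpha.
    alpha = 1 recovers plain FedProx averaging."""
    avg = np.mean([prox(gamma, A, b, x) for (A, b) in clients], axis=0)
    return x + alpha * (avg - x)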
---
Rebuttal Comment 1.1:
Comment: > Weakness 1
I don't understand why it is neither feasible nor meaningful to compare the complexity of two algorithms that are designed for precisely the same setting, i.e. smooth, convex functions in the interpolation regime. I think it is fair that you want to focus on the communication complexity rather than the local iteration complexity, but in this case this should be clearly discussed in the paper.
> Weakness 2
What I meant with my comment was precisely that the differentiation between $\gamma$ and $\alpha_k$ only becomes relevant when taking into account the interaction between the communication complexity and the amount of local steps required. So on the one hand you simply assume that you can solve these local problems, or at least you do not discuss how to solve them, but then you say that having the ability to set $\gamma$ and $\alpha_k$ separately is useful to adapt to the local problems.
I've raised my score to 6.
---
Rebuttal 2:
Comment: Thank you for your timely response. We appreciate your effort in reviewing our work.
- Weakness 1: We agree with the reviewer and will include a discussion on our focus on communication complexity in the paper.
- Weakness 2: We now better understand the reviewer's concern. Indeed, we did not include such explanations in the paper. In the next version, we will provide a discussion on solving the local problems and the role of the two parameters in this case.
We will add the following discussion to the paper in its next version.
> Each local proximity operator can be solved using different oracles. In practice, clients may use gradient descent or stochastic gradient descent to solve the local problem to a certain accuracy. The complexity of this subroutine depends on the local stepsize $\gamma$. If $\gamma$ is large, the local problem becomes harder to solve because we aim to minimize the local objective itself. Conversely, if $\gamma$ is small, the problem is easier since we do not stray far from the current iterate. As the choice of subroutine affects local computation complexity, comparing it directly with FedExP becomes complicated. Therefore, we compare the iteration complexity of the two algorithms, assuming efficient local computations are carried out by the clients.
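As a rough sketch of such a subroutine (our illustration, not part of the paper): solving $\mathrm{prox}_{\gamma f}(x)$ by gradient descent means minimizing $\varphi(z) = f(z) + \|z-x\|^2/(2\gamma)$, whose condition number can grow like $1+\gamma L$, so the iteration count of the inner solver increases with $\gamma$, matching the discussion above.

```python
import numpy as np

def inexact_prox(grad_f, L, gamma, x, tol=1e-8, max_iter=10_000):
    """Approximate prox_{gamma f}(x) by gradient descent on
    phi(z) = f(z) + ||z - x||^2 / (2*gamma).
    phi is (L + 1/gamma)-smooth and at least (1/gamma)-strongly convex,
    so its conditioning (and hence the iteration count) worsens as gamma
    grows. Returns the approximate prox point and the steps taken."""
    z = x.copy()
    step = 1.0 / (L + 1.0 / gamma)
    for k in range(max_iter):
        g = grad_f(z) + (z - x) / gamma
        if np.linalg.norm(g) < tol:
            break
        z = z - step * g
    return z, k
```

On an anisotropic quadratic, the same stopping tolerance takes noticeably more inner iterations for a large $\gamma$ than for a small one, which is the trade-off between local effort and the size of the proximal step described above.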
Please let us know if anything is unclear. | Summary: This paper proposes and analyzes several server-side extrapolation strategies to enhance the theoretical and empirical convergence properties of FedProx.
The authors present the convergence properties of the proposed methods for smooth convex and strongly convex problems in the interpolation regime.
Theoretical results demonstrate that the proposed methods have better dependence on the smoothness coefficient $L$ than FedExP in the general case.
Specifically, they achieve an iteration complexity of $\mathcal{O}(L_{\gamma}(1+\gamma L_{\max})/\epsilon)$ compared to FedExP's $\mathcal{O}(L_{\max}/\epsilon)$. Experimental results validate the theoretical analysis, demonstrating improved performance over both FedProx and FedExP.
Strengths: - A clear and solid proof is presented in the paper. I have reviewed it and believe the result is correct.
- The authors also discuss the setting where the coefficient $L_{\gamma, \tau}$ is unknown. I think these results are useful in practical applications.
Weaknesses: 1. Assumption 2, the interpolation regime, seems too strong. Are there any other published papers that use this assumption? If $\nabla f_i(x_*)=0$ for all $i\in [n]$, it suggests that all clients in the system have identical local datasets, which is impractical.
2. The proposed method is only designed for proximal-based algorithms, which limits its application.
3. The experiments are conducted on a small dataset with a logistic regression problem. I encourage the author to conduct experiments on more complex models.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The main concern is the suitability of Assumption 2. Are there any reasons for this assumption?
2. There are some typos:
- Line 212, Eq.(8), $\frac{1}{n}\sum_{i\in \mathcal{S}_k}$, Should this be $\frac{1}{\vert \mathcal{S}_k\vert}$?
- Does Eq.(23) assume full participation? It does not seem to align with the partial participation described in Eq.(7) and the setting in Theorem 1.
- Line 868, the second equation, Should it be $\chi^2$?
The author should review the rest of the paper to address these typos.
I will raise my score if the authors address my concerns.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback on our paper. Here is a detailed response to the weaknesses and questions the reviewer mentioned.
- Weakness 1 & Question 1: `Assumption 2, the interpolation regime, seems too strong. Are there any other published papers that use this assumption? If $\nabla f_i(x_*)=0$ for all $i\in [n]$, it suggests that all clients in the system have identical local datasets, which is impractical.`
Indeed, the interpolation regime is a strong assumption. However, in deep learning scenarios, we are often in the overparameterized regime, which is stronger. We emphasize that the interpolation regime does not imply that all local clients have the same datasets, but rather that they share a common minimizer $x_\star$. An example is the convex feasibility problem where the convex sets $\mathcal{X}_i$ intersect. Published papers, such as [1] and [2], use the interpolation regime assumption, with theoretical justifications provided in [3].
We discuss our method's performance in the non-interpolation, non-smooth, and non-convex cases in Appendix F. Specifically, without the interpolation regime assumption, our method converges to a neighborhood of the solution, and the optimal constant extrapolation parameter is reduced by a factor of 2. This paper focuses on the interpolation regime, as we were inspired by the extrapolation technique used in projection methods for convex feasibility problems and the similarity between projections and proximal operations. Prior to our method, it was unclear whether a constant extrapolation parameter would be effective.
[1] A. M. Subramaniam, A. Magesh and V. V. Veeravalli, "Adaptive Step-Size Methods for Compressed SGD With Memory Feedback," _IEEE Transactions on Signal Processing_, 2024.
[2] N. Ion, P. Richtárik and A. Patrascu. "Randomized projection methods for convex feasibility: Conditioning and convergence rates." _SIAM Journal on Optimization_, 2019.
[3] S. Arora, S. Du, W. Hu, Z. Li and R. Wang, "Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks" _PMLR_, 2019.
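As a small self-contained illustration of the point above (the matrices are toy values chosen by us, not from the paper): two clients can hold different data yet still satisfy $\nabla f_i(x_\star)=0$ at a common minimizer, which is all the interpolation regime requires.

```python
import numpy as np

# Two clients with *different* quadratic losses f_i(x) = 0.5*||A_i x - b_i||^2
# that nonetheless share the minimizer x_star: the interpolation regime
# requires a common solution, not identical local datasets.
x_star = np.array([1.0, 2.0])
A1 = np.array([[2.0, 0.0], [0.0, 1.0]])
A2 = np.array([[1.0, 3.0], [0.0, 2.0]])
b1, b2 = A1 @ x_star, A2 @ x_star  # consistent ("interpolated") systems

grad_f1 = lambda x: A1.T @ (A1 @ x - b1)
grad_f2 = lambda x: A2.T @ (A2 @ x - b2)
# Both local gradients vanish at x_star even though the local data differ.
```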
---
- Weakness 2: `The proposed method is only designed for proximal-based algorithms, which limits its application.`
The benefits of extrapolation also apply to gradient-based algorithms, allowing one to choose the appropriate algorithm for the specific setting. Extrapolation with gradient-based algorithms in federated settings has already been considered in [1]. In our experiments, we use FedExP, the algorithm proposed in [1], as a benchmark. Our method achieves a better worst-case convergence guarantee than FedExP. Additionally, compared to gradient-based algorithms, proximal algorithms often enjoy enhanced stability.
Our intuition behind this paper stems from the extrapolation technique used in the projection method to solve the convex feasibility problem and the similarity between projections and proximal operations. This is why we consider proximal algorithms in the first place.
[1] D. Jhunjhunwala, S. Wang and G. Joshi, "FedExP: Speeding Up Federated Averaging via Extrapolation." _ICLR_ 2023.
---
- Weakness 3: `The experiments are conducted on a small dataset with a logistic regression problem.`
We thank the reviewer for the feedback, more experiments will be included in the next version of the paper.
---
- Question 2: `The author should review the rest of the paper to address these typo.`
All typos mentioned will be corrected in the next version of the paper. We will carefully check the rest of the paper.
Specifically,
- In line 212, Eq (8), it should be $\frac{1}{| S_k |}$ instead of $\frac{1}{n}$.
- There is indeed a typo in Eq (23), we do not assume full participation here.
- In line 868, yes, there is a square missing here.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing the majority of my concerns. Given that Assumption 2 is relevant in deep learning scenarios, I recommend that the authors validate the proposed method within neural network settings in the next version of the paper.
Based on the responses provided, I am inclined to raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you! | Summary: In this paper, the authors present an enhanced version of the FedProx algorithm for federated learning, named FedExProx. Unlike the FedProx, this algorithm incorporates an extrapolation step on the server following the computation of the proximal operator on each client. The authors investigate both constant and adaptive extrapolation step-sizes and provide the corresponding convergence results. Numerical experiments demonstrate that this method surpasses FedProx in terms of convergence rate.
Strengths: This paper introduces the global step on the server side to the existing FedProx algorithm for the first time, resulting in a new algorithm called FedExProx. The analysis also includes an adaptive global step size, inspired by FedExP. The paper provides clear convergence results, and the theoretical proofs for the main theorems are also presented with great clarity.
Weaknesses: This paper has two main weaknesses. \
First, the benefits gained from the extrapolation step are not evident. As shown in Table 2 of this manuscript, the only noticeable difference in the convergence rate compared to the existing FedExP appears to be a difference in the constant factor. The order of parameters, such as $\tau$ (the number of participating clients, or the total number of clients in the full participation case) and $T$ (the total number of iterations), remains the same as in existing works. Given that this paper primarily focuses on theoretical contributions, this result is not sufficiently strong.\
Second, the assumptions appear to be too strong. Assumptions 2 (interpolation regime) and 3 (convexity) imply that all functions $f_i$ share the same optimal point. This is a very strong assumption, as it essentially indicates that there is no data heterogeneity, unlike existing works such as FedExP and FedProx, which allow for bounded data heterogeneity.
Technical Quality: 2
Clarity: 2
Questions for Authors: In Table 2, the authors may need to include the convergence rate of FedProx for comparison, as the proposed algorithm is more similar to an enhanced version of FedProx rather than FedExP. Another suggestion is to compare the convergence rates of these algorithms in the partial participation setting, since as shown in Table 5, the proposed algorithm still performs well in this scenario.\
In line 220, Remark 2, the authors claim that data heterogeneity is successfully managed. However, as mentioned in the weaknesses section, Assumptions 2 and 3 ensure no data heterogeneity. Therefore, this claim may be incorrect.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have addressed the main limitation of this paper in Section 5.1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback on our paper. Here is a detailed response to the weaknesses and questions the reviewer mentioned.
- Weakness 1: `First, the benefits gained from the extrapolation step are not evident. As shown in Table 2 of this manuscript, the only noticeable difference in the convergence rate compared to the existing FedExP appears to be a difference in the constant factor. Given that this paper primarily focuses on theoretical contributions, this result is not sufficiently strong.`
We respectfully disagree with the reviewer. While Table 2 shows that the worst-case convergence of FedExP and FedExProx differs only by a constant factor, key differences remain. FedExP uses an adaptive step size based on gradient diversity, whereas FedExProx uses a constant step size. The adaptive version of FedExProx offers a better worst-case convergence guarantee, as it depends on $L_{\gamma}(1+\gamma L_{\max})$, which can be up to $n$ times better than $L_{\max}$ according to Lemma 7. Thus, our method has a stronger theoretical guarantee. Additionally, no convergence guarantee was provided for FedExP with a constant extrapolation parameter in the original paper, and it was not clear whether such a parameter would be effective, so a theoretical comparison with FedExProx under these conditions was not possible. In general, due to the differences in the adaptive rules for selecting the extrapolation parameter, it is challenging to compare the convergence directly. However, our experiments in Figure 4 confirm that the iteration complexity of our method, with a properly chosen step size $\gamma$ and a constant extrapolation $\alpha$, outperforms FedExP with an adaptive extrapolation parameter.
---
- Weakness 2: `Second, the assumptions appear to be too strong. Assumptions 2 (interpolation regime) and 3 (convexity) imply that all functions share the same optimal point. This is a very strong assumption, as it essentially indicates that there is no data heterogeneity, unlike existing works such as FedExP and FedProx, which allow for bounded data heterogeneity.`
Indeed, the interpolation regime is a strong assumption. However, in deep learning scenarios, we are often in the overparameterized regime, which is even stronger. As noted in Appendix F.3, without assuming the interpolation regime, the method converges to a neighborhood and the optimal extrapolation constant is reduced by a factor of $2$. This paper focuses on the interpolation regime, as we are inspired by the extrapolation used in the projection methods of convex feasibility problems and the similarity between proximal operations and projections. We also provide discussions of our proposed algorithm in the non-smooth setting, non-convex setting and strongly convex setting in Appendix F.
---
- Question 1: `In Table 2, the authors may need to include the convergence rate of FedProx for comparison, as the proposed algorithm is more similar to an enhanced version of FedProx rather than FedExP. Another suggestion is to compare the convergence rates of these algorithms in the partial participation setting, since as shown in Table 5, the proposed algorithm still performs well in this scenario.`
Thanks for the suggestion. We will add a comparison of FedProx and FedExProx and those algorithms in the partial participation setting in the next version of the paper.
---
- Question 2: `In line 220, Remark 2, the authors claim that data heterogeneity is successfully managed. However, as mentioned in the weaknesses section, Assumptions 2 and 3 ensure no data heterogeneity. Therefore, this claim may be incorrect.`
Thanks for pointing this out, this is indeed a typo, and we will delete the last sentence in Remark 2 in the next version of the paper.
---
Rebuttal 2:
Title: Did our response address your concerns?
Comment: Dear Reviewer xABN,
Thanks again for your review. Did our rebuttal address your concerns? Please note that the other two reviewers increased their scores from 5 to 6 after rebuttal. Our average score is 5.33 now, which may still be considered borderline (the official email from NeurIPS said that scores between 4.5-5.5 are borderline). Your score may therefore have a large influence on the fate of our work.
Please let us know whether the concerns were addressed. If that is the case, we would of course be happy if this could be reflected in the score. If not, please let us know what remains to be explained -- today is the last day we can do so.
Thanks again!
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for addressing my concerns. Regarding the replies about the first weakness, I do not fully agree with your explanation. The authors claim that this paper uses a constant step size compared to existing works such as FedExP. However, to the best of my knowledge, in cases where there is no gradient noise and no data heterogeneity, FedAvg can also use a constant learning rate. Moreover, in FedExP, the global step size $\eta$ is required to satisfy $\eta \le \frac{1}{\tau L}$, which still constitutes a constant step size, doesn't it?
Based on the responses provided, I will raise my score to 5.
---
Reply to Comment 2.1.1:
Title: Thanks!!
Comment: We will respond to this question shortly.
authors
---
Reply to Comment 2.1.2:
Comment: Thank you for your response.
Regarding your concern, FedAvg indeed uses a constant learning rate $\eta_l$ for solving each local optimization problem. However, it does not employ extrapolation ($\eta_g = 1$). In its extrapolated version, FedExP, the constraint $\eta_l \leq \frac{1}{6\tau L}$ applies to the local stepsize $\eta_l$. The extrapolation parameter $\eta_g$, on the other hand, is determined adaptively in each round based on gradient diversity. The original paper does not provide a theory for using a constant $\eta_g$ in FedExP.
Hope this clarifies your concern. We appreciate your time and efforts on reviewing our paper. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and effort.
The reviewers highlighted several key strengths of our paper. Notably, we introduced an extrapolation parameter to the FedProx algorithm for the first time, developed adaptive versions that eliminate dependence on the unknown smoothness parameter, extended the algorithm to handle partial participation, and provided clear and robust proofs for our claims.
The reviewers also had questions and concerns, which we addressed in the individual responses. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Gloss-free Sign Language Translation by Reducing Representation Density | Accept (poster) | Summary: This work uses contrastive learning of sign language gestures to improve the discrimination power of learned gesture representations. This is motivated by showing, via t-SNE projections, that gloss-free representations (which do not benefit from an intermediate representation) are less well separated than gloss-based representations. After incorporating contrastive learning into the pre-training, fine-tuning, or both, performance improvements can be achieved for both sign language recognition and sign language translation over the presented baseline.
Strengths: The paper is tackling an important problem as accurate sign-language recognition and translation will enable accessibility for the hard of hearing and those that need to interact with the hard of hearing.
The paper is well motivated from the perspective of demonstrating that the particular measure used to determine representation quality (SDR) is significantly lower for gloss-free representations, which is the focus of interest here. However, the approach itself will have limited impact in the broader NeurIPS community. The paper is using known methods applied to the representation of sign language.
The work is based on open data and the authors will release their code and model with the paper. This will ensure reproducibility.
Weaknesses: The word “significance” has specific meaning, and you should not claim that results are significant without properly showing that they are so.
Results for only a single run are provided. This is acknowledged in the check sheet with the justification that increasing the number of runs is computationally expensive. However, the resources required are ~12 hours on 8x NVIDIA A100s, which does not seem excessive by current standards. One solution might be to just use the best values for the hyper parameters, and show these are representative across runs.
Some of the findings appear obvious -- of course representations of dissimilar gestures that project closely in embedding space will be difficult for a model to tease apart for downstream tasks.
There is a lot of emphasis and conclusions drawn from the visualizations, which are TSNE projections of the embeddings. TSNE visualizations should be considered with a degree of caution, and there are better solutions if you are trying to consider the global structure of the data.
Technical Quality: 3
Clarity: 3
Questions for Authors: Figure 2 — is the reported SDR just for the highlighted signs in the figure or is it computed across all sign gestures?
Is the SDR issue artificially inflated because of the dataset used? For example, using a corpus of signing from weather forecasting with a limited vocabulary and many signs around a small number of topics?
A missing limitation of the work is that a fixed margin is used to define the threshold for frame selection for the contrastive learning. Whilst 20 frames might be sufficient, as well as being impacted by closely repeating signs, could frame-selection also be impacted by signing rate?
My understanding is that two annotators select a representative frame for each gesture, and the main text said that details are in the appendix. However, I did not see how you select the final frame from the two (potentially different) annotations? Do you average the time stamps to pick the one in the middle?
Some other observations:
- Remove subjective qualifiers from your writing.
- Your equations should be punctuated and flow with the text.
- For Figure 2, I would highlight gloss-based vs. gloss-free approaches as at first glance approaches like SMKD look significantly better, and then later realized that this is expected since it is not a gloss-free approach.
- For Figure 3, the binning needs to be explained in the caption. Also is the axis label for the vertical axis correct? Is this figure not showing sign recognition accuracy and sign translation accuracy?
- It seems somewhat redundant including both Figure 1 and Figure 5.
- I think it would be worth reinforcing in Section 2.1 when you talk about mixing the training and testing tests that the purpose of this is only to derive labels, not for the model itself.
- Has the contrastive learning been used with the other baselines? It seems that since the performance there is less good, there might be more room for improvement?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes the limitations are addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed comments and insightful suggestions. We provide point-wise responses to your concerns below.
**W1: Lack of proper statistical validation**
Thank you for your suggestion. We reran the representative experiments of Table 2 five times, using different seeds (23, 77, 256, 1024, 2056). The table below shows the average results (with standard deviations). The results demonstrate that our improvements are stable across random seeds.
| **Model** | **R@L ↑** | **B@1 ↑** | **B@2 ↑** | **B@3 ↑** | **B@4 ↑** |
|---|----|--|----|----|----|
| GFSLT-VLP | 38.61 (±1.37) | 35.98 (±1.00) | 23.52 (±0.88) | 15.80 (±0.79) | 11.21 (±0.54) |
| + SignCL into Finetuning | 48.54 (±1.14) | 46.41 (±0.79) | 32.65 (±0.51) | 22.68 (±0.42) | 16.05 (±0.16) |
**W2: Limited contributions**
We respectfully disagree with this. We provide our explanation below:
* We are not merely "using known contrastive learning methods applied to the representation of sign language." The key contribution of this paper is identifying the representation density problem for gloss-free SLT for the first time.
* This discovery is neither obvious nor trivial. It is well-known that gloss-free methods in SLT lag significantly behind gloss-based approaches, but the reasons are still under investigation. We highlight that the representation density problem could be a bottleneck in restricting the performance of gloss-free SLT.
* The insights provided in this paper for SLT have been recognized by Reviewers rTJg and UuUf, even though they expressed concerns about the sensitivity analysis on sampling parameters.
* The simple but effective SignCL is the second contribution. Even though it is a straightforward solution to the representation density problem, we are the first to validate this and improve performance in two different frameworks by 39% and 46%, respectively. It will provide a direction for future gloss-free SLT. Reviewer rTJg has noted that this approach is "relatively straightforward in a good way."
**W3: Some findings appear obvious and are heavily reliant on t-SNE visualizations**
We respectfully disagree with these statements. Here is our explanation:
* Our findings are neither obvious nor limited: while it is evident that poor representations hinder a model's ability, whether gloss-free SLT faces a significant representation density problem still needs to be investigated.
* Our emphasis and conclusions are not based on visualizations alone. We first used SDR to quantify the discriminative capability of various existing sign feature extraction techniques. Our conclusions are derived from these quantitative metrics and results, particularly when comparing gloss-free and gloss-based approaches. Visualizations only serve as a visual aid to better illustrate the representation density problem.
**Q1: Is the reported SDR just for the highlighted signs in the figure or is it computed across all sign gestures?**
All reported SDRs in Figure 2 are computed across all sign gestures in the training set.
**Q2: Is the SDR issue artificially inflated due to the dataset used?**
We believe the SDR issue is not artificially inflated. Our related conclusions have been consistently validated on the CSL-Daily dataset, which is a multi-topic dataset. Moreover, many of our conclusions are derived from comparisons between gloss-free and gloss-based methods on the same dataset.
**Q3: Could frame selection be impacted by the signing rate given the fixed margin?**
Thank you for your insightful question. We address this concern from the following aspects:
* The margin is not fixed; it dynamically adapts based on the estimated average number of frames per gloss. Therefore, it actually adapts to the signing rate.
* Our sensitivity analysis shows that SignCL is not sensitive to the margin parameter, as evidenced by a variance of 0.062 when using different thresholds in [0, 10, 20, 30, 40, 50].
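To make the margin-based sampling concrete, here is a schematic frame-level contrastive term in the spirit of SignCL. This is our own simplified sketch, not the paper's exact loss: the positive/negative selection, temperature, and margin estimate may differ. The idea shown is that each frame is pulled toward a temporal neighbor and pushed away from frames more than `margin` frames away, which penalizes dense (near-identical) frame representations.

```python
import numpy as np

def frame_contrastive_loss(feats, margin, tau=0.1):
    """Schematic margin-based contrastive loss over frame features (T, D):
    the adjacent frame serves as the positive, frames farther than `margin`
    frames away serve as negatives (InfoNCE-style, cosine similarities).
    Illustrative only; the exact SignCL formulation may differ."""
    T = feats.shape[0]
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau  # temperature-scaled cosine similarities
    losses = []
    for t in range(T - 1):
        pos = sim[t, t + 1]                        # temporal neighbor
        neg_idx = [j for j in range(T) if abs(j - t) > margin]
        if not neg_idx:
            continue
        denom = np.exp(pos) + np.exp(sim[t, neg_idx]).sum()
        losses.append(-np.log(np.exp(pos) / denom))
    return float(np.mean(losses))
```

On toy inputs, a "dense" sequence (all frames nearly identical) incurs a higher loss than a temporally spread one, which is the intended pressure toward more discriminative sign representations.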
**Q4: How to select the final frame from the two (potentially different) annotations?**
We apologize for the missing details. By default, we select the middle frame as the representative, and the two annotators first check if this default frame is a good representative. For interval [$l_v$,$r_v$], we ensure the current frame clearly represents a gesture and is different from the previously selected one. If the default frame does not meet these criteria, the annotators review the frames within the interval [$l_v$,$r_v$] proposed by the sign-gloss aligner. If a suitable frame cannot be found within this interval, the entire video is marked for discard. Only data that both annotators consistently agree upon as good will be used.
**Q5: Is the axis label for the vertical axis correct?**
We apologize for the confusion. To clarify:
In Figure 3(a), the bar chart represents the SDR of different gloss bins (left axis), while the line represents the corresponding sign recognition accuracy (right axis).
In Figure 3(b), the left axis indicates the translation performance B@4.
**Q6: Could there be more room for improvement?**
* We agree that achieving performance in gloss-free SLT similar to gloss-based approaches still has a long way to go. However, gloss-free methods have the advantage of not requiring costly gloss annotations, making it easier to scale up training datasets and perform large-scale pretraining. Then, the pretrained model can serve as a strong starting point for us to finetune on a small amount of high-quality annotated data.
* We believe that addressing the representation density issue is crucial for effective pretraining. SignCL can serve as an effective optimization objective, encouraging the learning of more discriminative sign gesture representations.
**Other suggestions:**
We appreciate your insightful suggestions. It will certainly help improve our paper. We will incorporate your feedback to refine our paper in the upcoming version.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the responses to the questions and concerns raised. I also appreciate you taking the effort to provide the statistics over multiple runs. I will increase my score by a point.
I would make two observations about your response.
1. You used the word "merely" and then put quotation marks around words that did not appear in my review. If you are using quotation marks, then you are attributing those words to someone else. The only place those words are used is in your response here and the general response above.
2. I did not say that you rely on visualizations "alone", but rather that there is a lot of emphasis on TSNE projections. TSNE projections are known to be problematic. There are alternative projections, such as UMAP, which better preserve the global structure of the data.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for your careful suggestions. We have learned a lot, and we will be more rigorous in using quotation marks and the words we use. We also appreciate your clarification. We will include UMAP visualizations in future versions. | Summary: This paper identifies a critical issue of representation density in gloss-free sign language translation. A series of models were employed to verify the existence of this problem. To address this, the author proposed a straightforward and effective solution, SignCL loss. This objective improves the discriminative representations of input videos, achieving promising performance in PhoenixT and CSL-Daily datasets.
Strengths: * The paper is straightforward and comprehensible, and the proposed method is somewhat effective.
* The adequate visualizations validate the effectiveness of the proposed objective.
* The extensive experiments on both public benchmarks show plausible performance.
Weaknesses: * The technical novelty and contribution are somewhat limited. The entire paper mainly introduces a contrastive loss (SignCL), which has been widely utilized in other various domains.
* The proposed objective is only applied to a single model, which limits its persuasiveness. Experiments on more SLT models are essential to validate its effectiveness.
* On line 118, why does the test set need to be involved in optimization? Why must we manually determine the best frame from $l_{v}$ and $r_{v}$ instead of utilizing CTC's gradient? For VLP, how are $l_{v}$ and $r_{v}$ generated?
* In section 2.2, why are VLP, I3D, and SMKD methods grouped together? These methods should differ in terms of their training objectives. SDR of SMKD will naturally be low, because CTC loss is a discriminative loss. Achieving low SDR is essential for successful completion of the CSLR task.
Technical Quality: 2
Clarity: 2
Questions for Authors: * The writing in the article is overly colloquial, and the presentation of Formula 4 may be confusing. Additionally, there is an extraneous character ")" in the second line of the title in Figure 4.
* Line 236 claims that GFSLT "incorporates CLIP and MBART for model pretraining and finetuning." However, to my knowledge, GFSLT does not utilize CLIP's pre-trained weights.
* The performance of CSL-Daily in Table 2 is notably inconsistent. In Table 1, when Sign2GPT achieves R@L of 48.90, its B@4 is 22.52. However, in SignCL's CSL-Daily, despite achieving a similar R@L of 48.92, its B@4 drops to 16.16. Please verify the accuracy of these performance metrics.
* The t-SNE visualization method is unclear. How should the SLT task be correctly labeled with gloss?
* The datasets PHOENIX-2014T and CSL-Daily are relatively small in size. Can the effectiveness of the method be validated on larger datasets like How2Sign and Open-ASL?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: Yes, the limitations have been discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed comments and suggestions. We provide point-wise responses to your concerns below.
**W1: The technical novelty and contribution are somewhat limited:**
We respectfully disagree with the statement that "The entire paper mainly introduces a contrastive loss". We emphasize our contributions below:
* Introducing a contrastive loss is not the only contribution of this work. We have an entire Section 2 and extensive analyses dedicated to investigating the representation density problem in SLT.
* The key contribution of this paper is identifying the representation density problem for gloss-free SLT for the first time. It highlights a potential bottleneck in restricting the performance of gloss-free SLT, which advances the SLT field.
* Introducing SignCL as the second contribution. Even though it is based on a well-known contrastive learning method, it is a simple but effective solution to address the representation density problem. We are the first to validate this, providing a direction for future gloss-free SLT to learn more discriminative representations.
**W2: The proposed objective is only applied in the single model:**
We respectfully disagree with this conclusion. As described in line 13 and experiments listed in Section 4.1, we evaluate SignCL across multiple datasets and SLT frameworks, including both gloss-based and gloss-free settings. Reviewers rTJg and UuUf have recognized this contribution and even considered it as a strength.
**Q1: How are $l_v$ and $r_v$ generated?**
We noticed that you have several questions about the sign-gloss forced aligner. We provide a clearer introduction to address your concerns.
* To start, we want to emphasize that the purpose of the sign-gloss forced aligner is only to derive labels to measure the discriminability of the feature representations, not for the model itself. So for VLP, it does not need to produce $l_v$ and $r_v$; the entire Sign-Gloss Alignment consistently uses pre-trained models.
* We mix the test set into the sign-gloss forced aligner training process and employ volunteers to manually determine the best frame to ensure accurate SDR evaluation when assessing different SLT approaches.
* The generation of $l_v$ and $r_v$ has been covered in several previous works [a, b]. For specific details, you may refer to these studies. We highlight that, as shown in Appendix A.3, the sign-gloss forced aligner yields a Word Error Rate (WER) of 8.68, which is on par with human performance [c]. Even with this accuracy, we strive to ensure the most precise results on the test set.
**Q2: Why are VLP, I3D, and SMKD methods grouped together?**
* Section 2 aims to investigate the representation density problem within existing sign feature extraction methods, including both gloss-free and gloss-based methods.
* We acknowledge your insightful observation that the SDR of gloss-based methods, such as SMKD, will naturally be lower because the CTC loss is a discriminative loss. We grouped them together to highlight that gloss-free SLT suffers from worse representation density compared to gloss-based methods.
**Q3: Please verify the accuracy of these performance metrics in Table 1 and Table 2.**
* We have re-calculated our metrics, and they are accurate. We believe the differences in R@L and B@4 metrics are due to the different aspects they measure. R@L focuses on the retrieval accuracy, while B@4 emphasizes the fluency and relevance of the generated translations.
* Additionally, the characteristics of Chinese and German languages are quite different, which can impact the consistency of R@L and B@4 across different languages.
* It is also worth noting that similar discrepancies have been observed in past methods. For example, Sign2GPT achieves an R@L of 42.36 and a B@4 of 15.40, whereas GFSLT-VLP achieves a similar R@L of 42.49 but a B@4 of 21.44 on PHOENIX-2014T.
**Q4: Can the effectiveness of the method be validated on larger datasets like How2Sign and Open-ASL**
* We believe that SignCL can still be effective on larger datasets, as larger datasets often lack costly gloss annotations to construct a discriminative loss. SignCL can serve as an effective optimization objective in this process, encouraging the learning of more discriminative representations of sign gestures.
* We train on the How2Sign dataset for 40 epochs from scratch. In this limited experiment, the GFSLT baseline performance was 0.79 B@4, while applying SignCL as an additional optimization objective improved the performance to 1.58 B@4. These experiments are comparable to the results from [d], where they also trained from scratch.
| **Methods** | **B@1** | **B@2** | **B@3** | **B@4** |
|--|--|--|--|--|
| H2S (no pretraining) [d] | 13.92 | 4.69 | 1.82 | 0.86 |
| H2S (pretraining) [d] | 14.96 | 5.11 | 2.26 | 1.22 |
| GFSLT | 12.31 | 3.85 | 1.53 | 0.79 |
| + SignCL into Finetuning | **17.36** | **6.27** | **2.87** | **1.58** |
**Suggestions And Typos:**
* We apologize for the lack of clarity in the text and the typographical errors. We have revised our paper according to your suggestions to improve its readability and accuracy.
* To clarify, Line 236 states that GFSLT-VLP "incorporates CLIP and MBART for model pretraining and finetuning." What we mean is that GFSLT-VLP uses the pretraining strategies of CLIP and MBART to retrain the GFSLT model, not talking about using their pre-trained weights. We will make this point clearer in future versions of the paper.
References:
[a] CTC-segmentation of large corpora for German end-to-end speech recognition.
[b] Cross-modality Data Augmentation for End-to-End Sign Language Translation
[c] Achieving human parity in conversational speech recognition Microsoft Research Technical Report 2017
[d] YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus
---
Rebuttal 2:
Comment: Firstly, I would like to thank the authors for their detailed responses and the additional experiments. After reading the rebuttal, I still have the following concerns regarding the paper:
1. The authors claim that the main contribution of the paper is identifying the representation density problem for gloss-free SLT for the first time. However, I believe the fundamental reason why gloss-free SLT lags behind gloss-based SLT in performance is that the Vision Encoder lacks effective supervision signals, leading to insufficient representation capacity of the visual features. The representation density problem is merely one manifestation of the weak representational capacity of the visual encoder. In general vision tasks, numerous studies have been conducted on improving the representational capacity of visual encoding, such as MoCo, DINO, and MAE. Therefore, striving to enhance the discriminative power of visual representations is a direction that has been continuously explored by the academic community. Thus, I believe that positioning this discovery as a core contribution lacks novelty.
2. Regarding the representation density problem in the visual encoder, enhancing the representational capacity of the visual encoder is key to addressing this issue. The contrastive loss proposed in this paper does indeed improve the representational capacity of the visual model to some extent. However, from the quantitative experimental results, the improvement effects are inconsistent across different datasets. Compared to PHOENIX-2014T and CSL-Daily, H2S is a very challenging dataset due to its larger training set and vocabulary. It can be observed that the improvement brought by the proposed method on this dataset is very limited (B@4 0.79 → 1.58). In contrast, GLoFE [1] demonstrated that directly using the CNN model trained in [2] could achieve a B@4 of 2.24. Therefore, these results do not provide strong evidence that the contrastive loss proposed in this paper can effectively solve the representation density problem. In other words, using a pre-trained visual model with stronger representational capacity might well solve this problem.
[1] Lin, Kezhou, et al. "Gloss-free end-to-end sign language translation." In ACL 2023.
[2] Oscar Koller, et al. "Weakly supervised learning with multi-stream cnn-lstm-hmms to discover sequential parallelism in sign language videos." In TPAMI 2019.
---
Rebuttal 3:
Title: Addressing Misalignment in Perspectives on the Position of This Paper
Comment: Thank you for your thoughtful review and for highlighting key points. I believe **we are aligned in recognizing that gloss-free SLT struggles with the weak representational capacity** of the Vision Encoder due to insufficient supervision signals. The difference may lie in how we approach the problem, particularly in terms of perspective and granularity.
In response to your concerns, we would like to clarify our position:
>* We find that describing this problem as "insufficient representation capacity" is too broad and general. In our work, **we formalize this problem specifically as the lack of discriminative power in the visual representations of sign gestures**, which we term the "representation density problem."
>* While general vision tasks have made strides in improving visual representations, we argue that the ability to distinguish between sign gestures with different meanings is crucial in the context of sign language, and prior to our work, this has not been highlighted.
>* We demonstrate that the aspect of representation density significantly impacts sign language translation performance, by comprehensive investigation in Section 2.
>* SignCL is relatively straightforward (in a good way) in addressing the lack of discriminative power for different sign gestures. It represents a good start, and of course, there is much more that can be explored in the future.
Regarding the performance in H2S, we would like to argue two points:
>* Our experiments were limited due to constraints in computational resources and time. We only trained for 40 epochs, whereas GLoFE [1] was trained for 400 epochs.
>* The CNN model used in GLoFE [1] was pre-trained with GLOSS annotations to extract visual features [2]. The SignCL approach does not use any gloss annotations, whether for pretraining or as weak supervision.
>We believe that **directly comparing our results with GLoFE is unfair**. We will continue training the model, but we estimate that similar training would take around 25 days on an A800*8 GPU setup due to GFSLT processing raw video inputs directly.
So far, our results demonstrate that incorporating SignCL does improve GFSLT performance (B@4 0.79 → 1.58) on How2Sign.
Thank you again for your thoughtful review and detailed responses. We welcome further discussion.
---
Rebuttal Comment 3.1:
Title: Kindly Follow-up on the Discussion
Comment: Dear Reviewer,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper. We have provided a new response to address the concerns you further raised. Thank you for your thoughtful review and for highlighting key points. We kindly inquire if there is any further information or clarification you might need from our side. We apologize if this follow-up causes any inconvenience. Hope you have a great day!
---
Rebuttal Comment 3.2:
Comment: Thanks for your detailed responses and further discussion. After carefully reviewing the rebuttal and the main paper, I still consider that presenting the discovery of the 'representation density problem' as a core contribution lacks innovation. In the field of sign language video understanding, improving the representational capacity of visual models is a common goal among researchers. Furthermore, the proposed contrastive loss function has not consistently improved performance across public datasets (the performance improvements on PHOENIX-2014T and H2S are significantly lower than on CSL-Daily), making it difficult to assess the generalization ability of SignCL. I suggest the authors conduct more in-depth research on this phenomenon to enhance the persuasiveness of the proposed method. Therefore, I maintain my initial score.
---
Rebuttal 4:
Comment: Thank you for your thoughtful feedback.
Based on the new concerns you’ve raised, we would like to share some additional arguments.
We would like to mention Sign2GPT[a], which involves fine-tuning large-scale pretrained vision and language models (Dino-V2 and XGLM) for SLT.
* According to the ablation study results presented in Table 2 of their paper, directly fine-tuning these models pretrained on general domains (termed as Sign2GPT) yields results that are only competitive with GFSLT-VLP (B@4 19.42 vs. 21.44 on PHOENIX-2014T).
* To improve performance, they developed a pseudo-gloss pretraining strategy (termed as Sign2GPT(w/PGP)) on Dino-V2.
In contrast, our approach, using a simple contrastive loss, achieves better improvements over GFSLT-VLP when compared with Sign2GPT(w/PGP) on both the PHOENIX-2014T and CSL-Daily datasets.
---
Regarding the performance of the proposed SignCL, we believe it should be compared to fully gloss-free methods such as Sign2GPT [a], rather than GLoFE [1], which uses a gloss-annotated dataset to pre-train its visual backbone$^*$.
We have provided consistent benchmarks against competitive baselines [a] and [b]. The improvements we observe on the two most widely used datasets in the sign language field (PHOENIX-2014T and CSL-Daily) are generally consistent with [a] and [b].
To sum up, though numerous studies in general vision tasks have explored improving the representational capacity of visual encoders, this does not diminish the novelty of our contribution to SLT. **Our SignCL outperforms those achieved using general vision pre-trained models, such as Dino-V2 in Sign2GPT[a].**
While we indeed did not have sufficient resources to run How2Sign for 400 epochs (which would take around 25 days on an A800*8 GPU setup), we believe that the current results are sufficient to validate our approach. **It is fairer to compare with Sign2GPT [a] and GFSLT [b] due to the consistent setting on the two most widely used benchmarks (PHOENIX-2014T and CSL-Daily).**
---
[a] "Sign2GPT: Leveraging Large Language Models for Gloss-Free Sign Language Translation" in ICLR'24
[b] "Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining" in ICCV'23
\* As referenced in Section 3.1 of GLoFE [1]: "The backbone is pre-trained on the WLASL dataset (Li et al., 2020a) through the isolated sign language recognition task."
---
Rebuttal Comment 4.1:
Comment: Thank you for your thoughtful responses.
* For Sign2GPT [1], although this method utilizes large-scale pre-trained vision and language models (Dino-V2 and XGLM), both modules are frozen during training (except for the few learnable parameters from LoRA fine-tuning). Under such a setting, Sign2GPT achieves performance comparable to other methods that train all parameters of their models. This result further validates my assumption that **enhancing the representative capacity of vision encoders is the key to improving gloss-free SLT or gloss-based SLT.** Meanwhile, I also acknowledge that the proposed SignCL can improve the representational capacity of visual models to some extent. However, I am confused by the varying degrees of performance improvement that SignCL achieves across different benchmarks.
[1] "Sign2GPT: Leveraging Large Language Models for Gloss-Free Sign Language Translation" in ICLR'24
---
Rebuttal 5:
Comment: Thank you for your insightful feedback and for acknowledging the effectiveness of SignCL in enhancing the representational capacity of visual models. We do not see the assumption you describe as a point of contention with the contributions of our work.
In fact, we have specifically formalized this poor visual encoder as the lack of discriminative power in the visual representations of sign gestures. **Our work systematically experiments to validate this issue (Section 2), and our results clearly demonstrate that improving the discriminability of visual representations indeed leads to better translation performance (Section 3).**
Regarding the inconsistency in the improvement of SignCL on PHOENIX-2014T and CSL-Daily, this issue aligns with what we addressed in Q3. **The characteristics of Chinese and German languages are quite different**, making direct comparisons across datasets challenging. Sign2GPT vs. GFSLT-VLP also exhibits similar varying degrees of performance improvement.
We understand there may be concerns about our performance improvement on CSL-Daily. We have committed to release our code, models, and logs to facilitate the reproduction of our results. | Summary: This paper addresses the challenge of gloss-free sign language translation (SLT) by identifying and tackling the "representation density problem". The authors observe that visual representations of semantically distinct sign gestures tend to be closely packed in feature space, making it difficult for gloss-free methods to distinguish between different gestures. To address this, they propose a contrastive learning strategy called SignCL, which encourages models to learn more discriminative feature representations in a self-supervised manner. The paper demonstrates significant improvements in BLEU scores across various translation frameworks on the CSL-Daily dataset.
Strengths: 1 The paper identifies a novel problem (representation density) in gloss-free SLT and proposes an innovative solution (SignCL) to address it. The proposed SignCL method shows substantial improvements in SLT performance without requiring additional model parameters, potentially advancing the field of gloss-free SLT.
2 The authors conduct thorough experiments to demonstrate the existence of the representation density problem and the effectiveness of their proposed solution. They evaluate their method across multiple datasets and SLT frameworks.
3 The paper is well-structured and clearly written. The problem statement, methodology, and results are presented in a logical and easy-to-follow manner.
Weaknesses: 1 The sampling strategy for SignCL makes assumptions that may not always hold true in real-world scenarios. Specifically, it assumes that adjacent frames always belong to the same sign gesture and that frames far apart always represent different semantics. This approach overlooks the possibility of rapid transitions between gestures or the repetition of gestures over time, which could lead to incorrect positive or negative samples.
2 The SignCL method relies on several manually defined parameters and thresholds, such as the margin for determining positive and negative samples. The paper lacks a rigorous justification for these choices or an analysis of how sensitive the method is to these parameters. A more principled approach to parameter selection or a comprehensive sensitivity analysis would strengthen the scientific basis of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1 How does the performance of SignCL vary with different choices of the margin parameter in the sampling strategy? Is there a systematic way to determine the optimal margin for a given dataset?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss their limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed comments and insightful suggestions. We provide point-wise responses to your concerns below.
**W1: Lack of comprehensive sensitivity analysis on sampling strategy**
Thank you for your insightful suggestions. We conducted an additional comprehensive sensitivity analysis on the sampling strategy.
**1. Before we proceed, let's briefly revisit some details from Formula 4**: margin = max(10, len(frames)/len(text) * 2.3).
* The margin for negative sampling **dynamically** depends on the estimated average margin of each gloss (i.e., len(frames)/len(text) * speech-to-gesture Zipf’s factor) and a minimum threshold (i.e., 10). The Zipf’s factor, set as 2.3, refers to the speech-to-gesture Zipf’s Law.
* We calculated the distribution of the dynamically estimated margin, with the results shown in the table below. A more detailed distribution can be seen in the [attached PDF](https://openreview.net/attachment?id=LU1t0zCyyb&name=pdf) (green background).
| **Margin**|[0, 10)|[10, 20)|[20, 30)|[30, 40)|[40, 50)|[50, ∞)|
|----|:--:|:--:|:--:|:--:|:--:|:--:|
| **Count** | 113 | 3460 | 3264 | 230 | 23 | 6 |
* Only 1.6% fall into the [0, 10) range, which means that the margin is primarily determined by the estimated average frames of each gloss in our paper (i.e., len(frames)/len(text) * 2.3).
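As a concrete illustration, the dynamic margin above can be computed as follows (a minimal sketch of Formula 4; the function name and parameter names are ours):

```python
def negative_margin(num_frames: int, text_len: int,
                    zipf_factor: float = 2.3, threshold: int = 10) -> float:
    """Dynamic negative-sampling margin from Formula 4.

    Estimates the average number of frames per gloss as
    frames-per-word * Zipf's factor, floored at a minimum threshold.
    """
    return max(threshold, num_frames / text_len * zipf_factor)

# A 200-frame video with a 20-word translation gives a margin of 23 frames;
# very short videos fall back to the minimum threshold of 10.
```

Only when the estimated frames-per-gloss term drops below the threshold (1.6% of samples above) does the fixed minimum take over.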
**2. Sensitivity analysis on the threshold:** We designed a minimum threshold to ensure the margin is not too small.
* Experiment setup: To make the analysis more principled, we evaluated the threshold values at [0, 10, 20, 30, 40, 50]. Note that threshold=0 and threshold=50 indicate that the margin is dominated by the dynamically estimated margin and the fixed threshold, respectively.
* Experiment results: We uniformly trained for 80 epochs on PHOENIX-2014T due to resource limitations. The results in the table below indicate that SignCL is not sensitive to the threshold parameter, with a variance of 0.062.
| | | | | | ||
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| **Threshold** | 0 | 10 | 20 | 30 | 40 | 50 |
| **B@4** | 17.24| 17.63| 17.55| 17.63| 17.13| 17.11|
**3. Sensitivity analysis on dynamically estimated margin:** We use text length and Zipf’s factor to estimate the average frames of each gloss (gloss-free setting).
* Experiment setup: To make the analysis more principled, we first use gloss labels to calculate the ground truth margin distribution for PHOENIX-2014T (green in Figure 8), which yields a Zipf’s factor of 2.1. We then evaluated Zipf’s factor values of [1, GT, 2.3, 3, 4]. Note that Zipf’s factor = 1 means we use len(frames)/len(text) directly to estimate the margin, while GT represents using the ideal len(frames)/len(gloss) to determine the margin (Zipf’s factor = 2.1).
* Experiment results: The results show that using Zipf’s factors between 1, 2.3 and the GT margin does not lead to significant differences. When Zipf’s factor is set to 4, there is a noticeable drop in performance due to the margin being too large, which reduces the negative sampling interval (high probability of sampling from the same negatives).
| **Zipf’s factor** | 1 | GT | 2.3 | 3 | 4 |
|---|:---:|:---:|:---:|:---:|:---:|
| **B@4** | 17.45 | 17.89 | 17.63 | 17.29 | 16.26 |
----
**Q1: How to systematically determine the optimal margin?**
Thank you for your insightful question. Based on our comprehensive sensitivity analysis, we found that SignCL is not sensitive to the threshold and Zipf’s factor. Therefore, we suggest setting the optimal margin approximately equal to the mean ± standard deviation of the estimated average frames of each gloss based on a given dataset, e.g., len(frames)/len(text) * 2. The threshold can be set to mean - standard deviation.
----
**W2: The sampling strategy could lead to incorrect positive or negative samples**
* We agree that the sampling strategy might indeed produce errors in certain special cases. However, we would like to emphasize that a range of contrastive learning frameworks demonstrate that contrastive learning can still perform well even when there is noise in the sampling strategy [a], as shown by SimCLR [b] and MoCo [c].
* This robustness is because the contrastive function inherently accommodates variability by focusing on relative differences rather than the absolute correctness of positive and negative pairs. Special cases of incorrect positive or negative samples do not significantly impact overall performance. Our sensitivity analysis also indicates that variations in the margin have minimal effect.
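For concreteness, the adjacent-positive / distant-negative sampling discussed in this thread can be sketched as follows (a minimal illustration with hypothetical names, not the paper's implementation):

```python
import random

def sample_contrastive_pair(num_frames: int, anchor: int, margin: int):
    """Pick a positive and a negative frame index for an anchor frame.

    Positive: an adjacent frame, assumed to show the same sign gesture.
    Negative: a frame at least `margin` frames away, assumed to show a
    different gesture (this assumption can fail for repeated gestures,
    which contrastive losses tolerate as sampling noise).
    """
    positive = anchor + 1 if anchor + 1 < num_frames else anchor - 1
    candidates = [i for i in range(num_frames) if abs(i - anchor) >= margin]
    negative = random.choice(candidates)
    return positive, negative
```

Because both samples come from the same video, nuisance factors (signer, background, camera) are shared, so the contrast is carried by sign-specific features.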
---
[a] Contrastive Representation Learning: A Framework and Review
[b] A Simple Framework for Contrastive Learning of Visual Representations
[c] Momentum Contrast for Unsupervised Visual Representation Learning | Summary: This paper focuses on gloss-free sign language translation (SLT) and is largely motivated by the large cost to annotating glosses. The authors discussed a so-called "representation density problem" in gloss-free SLT where semantically dissimilar signs appear in a similar part of the latent space.
The first technical contribution relates to a metric they introduce to quantify the representation problem, which uses Fisher’s Discriminant Ratio (FDR) to assess the difference between the inter- and intra-gloss features. To improve the separability, as their second contribution, they propose SignCL, a contrastive learning strategy that encourages the model to learn more discriminative feature representations in a self-supervised manner.
The results show significant improvements in BLEU scores on the CSL-Daily dataset compared to state-of-the-art methods.
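For intuition, a generic two-class Fisher's Discriminant Ratio over gloss feature clusters might look like the sketch below (function name and normalization are our assumptions, not the paper's exact SDR definition):

```python
import numpy as np

def fisher_discriminant_ratio(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Between-class separation over within-class scatter for two gloss clusters.

    Higher values mean the two glosses are more separable; densely packed
    representations of semantically distinct glosses yield values near zero.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    between = float(np.sum((mu_a - mu_b) ** 2))
    within = float(feats_a.var(axis=0).sum() + feats_b.var(axis=0).sum())
    return between / within
```

Averaging such per-pair ratios over all gloss pairs gives one way to quantify how "dense" a feature space is.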
Strengths: Intuitively the motivation for the paper makes sense, the methods are relatively straightforward (in a good way), and their solution seems to work well. The authors provide a good amount of detail (especially in the appendix) in case someone wanted to reproduce.
Weaknesses: A primary motivating example in the paper is that “RECIPROCATE” and “REVENGE” live in similar parts of the latent space because the hand motions are similar (even though the facial expressions are different). The proposed contrastive approach takes samples from different parts of the same sentence, where each sign should contrast with dissimilar signs in that sentence. Perhaps this is just an issue with the motivation, but it doesn't seem like the approach would help with the motivating example. Similarly, it doesn't seem like the contrastive approach should be any more helpful with gloss-free SLT vs gloss-based SLT.
I am wary about some of the t-SNE visualizations. The Sign Density Ratios don't change that much between approaches, and given that t-SNE is known to be deceptive at times, I wonder if these visualizations are representative or overstated.
Sec 4.3 on qualitative analysis is interesting but is too anecdotal. It would be useful to have a much deeper set of analyses here covering a much larger set of examples.
Overall, I think this paper is certainly not bad, but I'm on the fence about whether the novelty or depth is sufficient for this venue.
Technical Quality: 3
Clarity: 3
Questions for Authors: There are two types of issues that are highlighted: (1) challenges where signs have the same hand motion but different facial expression and (2) challenges where the same hand shape is used but with different motions (e.g., piano vs typing). Can you talk specifically about how the proposed approaches help with face and motion errors?
I noticed some odd results in A.3.1. The WER results for train/test/dev are hugely different: 8.68/8.03/25.28. How is it the case that 'dev' has a 25% WER but train and test have a hugely improved 8%?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Their responses seem reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed comments and insightful suggestions. We provide point-wise responses to your concerns below.
**Q1: How do the proposed approaches help with face and motion errors?**
* Thank you for your question. Our SignCL approach samples positive and negative examples from a single video, which means that non-sign features such as the signer’s background and camera angle remain consistent. Consequently, when the contrastive learning framework distinguishes between positive and negative samples, the model naturally focuses on sign-specific features, such as the differences in facial expressions and hand movements, rather than on non-sign features like the signer or background.
* To sum up, although SignCL is not explicitly designed to address face and motion errors (as they are not the primary focus of this paper), the strategy of sampling from the same video inherently directs the model's attention to more sign-specific features, such as subtle differences in facial expressions and motions.
* We hope future work will design face- and motion-aware sign embedding backbones and apply SignCL to them. To the best of our knowledge, frame-based sign embedding models using CNN and ViT architectures are currently the best practice for sign language translation.
**Q2: Why are there odd results in A.3.1?**
* Sorry for the confusion. To briefly recap, the purpose of the gloss-sign aligner is only to derive labels to calculate SDR, not for the model itself. We want to ensure the SDR is as accurate as possible, so we mix the test set into the training set to train the gloss-sign aligner. The dev set is reserved for evaluating the training process of the gloss-sign aligner and is not used in the SDR calculation. Therefore, the results for the dev set may appear less optimal.
**W1: The proposed SignCL approach does not seem aligned with the primary motivating example:**
We respectfully disagree. Here is our explanation:
* The examples in Figure 1 or Figure 5 are intended to aid in illustrating the representation density. We do not explicitly tackle these examples in our method. We apologize for any confusion, and we will emphasize these points more clearly in future versions.
* The key contribution of this paper is identifying the representation density problem for gloss-free SLT for the first time. We discuss the overall discriminability of feature representation of sign gestures, measured by the Sign Density Ratio (SDR).
* The proposed SignCL serves as a relatively straightforward attempt to improve the discriminability of representations in the gloss-free setting.
**W2: It doesn't seem like the contrastive approach should be any more helpful with gloss-free SLT vs gloss-based SLT**:
* Of course, SignCL can also work for gloss-based methods, and we have results in Section 4.1 and Appendix A3.3 that show this. However, the benefit may not be as large as that in the gloss-free setting because gloss-based methods use costly gloss annotations and CTC loss as a discriminative optimization objective.
* The motivation of SignCL is that gloss-free methods suffer from worse visual representation due to the lack of costly gloss annotations. SignCL is designed in a self-supervised manner and does not rely on any gloss information to fit the gloss-free setting.
**W3: t-SNE visualizations might be misleading**:
We understand the t-SNE visualizations can be deceptive at times. We want to emphasize that t-SNE is not the primary basis for our conclusions; it serves as a visual aid to better illustrate the representation density problem.
* The conclusions about representation density and performance drop are derived from the quantitative metrics and experiment results, especially when comparing gloss-free and gloss-based approaches.
* In Section 2, we comprehensively investigate the representation density problem by using the Sign Density Ratio to measure feature discriminability within existing sign feature extraction methods. We also use sign recognition and translation tasks to analyze how different densities of representations affect performance.
* We believe the representation density problem is not overstated. As shown by the experiment results in Figure 3 and Appendix A.3.3, gloss-free features indeed exhibit higher SDR and significantly poorer sign language recognition and translation performance. It highlights a potential bottleneck in restricting the performance of gloss-free SLT.
**W4: Qualitative analysis in Section 4.3 is anecdotal and needs more depth**:
* Thank you for your insight. We are open to adding more cases and analyses. Unfortunately, the field of sign language research currently lacks an appropriate benchmark that is annotated with a large number of gestures that are similar in motion but distinct in meaning.
* However, this paper focuses on the overall discriminability of feature representation of sign gestures. We would like to emphasize that, in addition to the visual analysis, we have provided overall sign recognition accuracy as quantitative evidence.
* In Section 2 and Appendix A.3.3, we examine sign recognition accuracy and translation performance when applying SignCL to GFSLT-VLP pretraining. Figure 3 and Tables 7/8 demonstrate that the representation has better discriminability (lower SDR) and higher performance in sign language recognition after applying SignCL to the GFSLT-VLP baseline. We will make this qualitative analysis a separate section and make it clearer in future versions.
---
Rebuttal 2:
Title: Kindly Follow-up on the Discussion
Comment: Dear Reviewer,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper. Our response has been available for several days now. With the discussion deadline approaching, we kindly inquire if there is any further information or clarification you might need from our side.
In our response:
> We have provided a clearer explanation of the position of this paper, where this paper focuses on the overall discriminability of feature extraction in sign language translation. We apologize for any confusion caused in the original paper, particularly regarding the cases in Figures 1 and 5.
> Additionally, we have included further margin distribution statistics and sensitivity analysis. The experiments demonstrate that the SignCL method is not sensitive to the margin parameter.
We would greatly appreciate your prompt response and thank you once again for your valuable comments and insightful suggestions. We apologize if this follow-up causes any inconvenience. Hope you have a great day! | Rebuttal 1:
Rebuttal: We sincerely appreciate all reviewers for their detailed comments and insightful suggestions. We are encouraged that they find our paper identifies a **novel problem** (representation density) in gloss-free SLT [Reviewer rTJg and UuUf], and introduces a relatively straightforward SignCL to address this representation problem (**simple and effective method**) [Reviewer rTJg, UuUf, bAiC and rKGZ].
We would like to address some key concerns below:
1. We respectfully disagree that our contributions are limited.
> * **We are not merely "using known contrastive learning methods applied to the representation of sign language."** The key contribution of this paper is identifying the representation density problem for gloss-free SLT for the first time.
> * **This discovery is neither obvious nor trivial.** It is well-known that gloss-free methods in SLT lag significantly behind gloss-based approaches, but the reasons are still under investigation. We are the first to take a closer look at the representation of sign gestures and demonstrate that gloss-free methods suffer from worse representation density. We highlight that the representation density problem could be a bottleneck in restricting the performance of gloss-free SLT, providing a direction for future gloss-free SLT to learn more discriminative representations.
> * Even though contrastive loss has been widely utilized in other domains, we believe SignCL is a very straightforward and effective solution to the representation density problem in the gloss-free setting. Our experiments show that it can improve performance in two different frameworks by 39% and 46%, respectively. This also highlights that representation density is a critical issue in gloss-free SLT.
2. Findings and conclusions are not merely based on t-SNE visualizations.
> * In Section 2, we comprehensively investigate the representation density problem within existing sign feature extraction methods by using SDR to measure feature discriminability. We also use sign recognition and translation tasks to analyze how different densities of representations affect performance.
> * Our conclusions are derived from these quantitative metrics and experiment results, especially when comparing gloss-free and gloss-based approaches. The visualizations are used to aid in illustrating the findings about the representation density.
3. Another main concern is the lack of analysis and supporting detail for the straightforward sampling strategy, which may lead to incorrect positive or negative samples in certain special cases. We provide point-wise responses to this concern below:
> * There is a misunderstanding that the margin in SignCL is fixed or heavily influenced by the threshold. In fact, the margin is dynamically evaluated based on the text length and a validated speech-to-gesture Zipf’s factor (i.e., len(frames)/len(text) * 2.3). The distribution of margins during training is shown in the [global PDF](https://openreview.net/attachment?id=LU1t0zCyyb&name=pdf).
> * We want to emphasize that the contrastive function inherently accommodates variability by focusing on relative differences rather than the absolute correctness of positive and negative pairs. Therefore, partially incorrect positive or negative samples do not affect performance. This has been verified in our supplemental sensitivity analysis.
> * Moreover, a range of related works, such as SimCLR and MoCo, demonstrates that contrastive learning can still perform well even when there is noise in the sampling strategy.
Pdf: /pdf/df3782a890823909dee55d453384af24af86a970.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper focuses on improving the performance of gloss-free sign language translation. The authors discovered the representation density problem in sign language translation: in the feature space, the semantic visual representations of different sign language gestures tend to be closely clustered together, making it difficult for gloss-free methods to distinguish different gestures and thus significantly affecting translation performance. To solve this problem, the paper proposes a simple and effective contrastive learning strategy, SignCL, which encourages gloss-free models to learn more discriminative feature representations in a self-supervised manner. Experimental results show that SignCL is able to significantly reduce representation density and improve performance in different translation frameworks.
Strengths: - The authors discovered the representation density problem for the first time in the field of sign language and conducted a detailed analysis to show that this problem does affect the performance of sign language translation. These findings will help advance the field of sign language processing.
- Based on this finding, the authors proposed a contrastive learning method to improve the representation density problem. Experiments show that this method works well in the gloss-free setting.
- The authors promise to open-source the code and models.
Weaknesses: - The core of the contrastive learning method in this paper is the selection of corresponding positive and negative samples. Analysis and supplementary detail on the selection strategy would make this work more complete. For example, what is the impact of choosing different distance parameters in Formula 4? In addition, the selection of negative examples does not seem to need to be limited to a single video. For example, randomly selecting other frames from the same signer may be a better choice.
- The contrastive learning method proposed by the authors can be considered a method to enhance the representation of sign language videos. This method should also have a certain effect on the gloss-based method, although the benefit may not be as large as in the gloss-free setting. I would be happy to see the authors add some relevant data, which could expand the scope of application of this paper's method.
Technical Quality: 3
Clarity: 3
Questions for Authors: see Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed comments and insightful suggestions. We provide point-wise responses to your concerns below.
**W1: Analysis and supplementary detail on the sampling strategy can make this work more complete**
Thank you for your suggestions. We have added a systematic sensitivity analysis of the sampling strategy.
1. Before we proceed, let's briefly revisit some details from Formula 4: margin = max(10, len(frames)/len(text) * 2.3).
* The margin for negative sampling **dynamically** depends on the estimated average margin of each gloss (i.e., len(frames)/len(text) * speech-to-gesture Zipf’s factor) and a minimum threshold (i.e., 10). The Zipf’s factor, set as 2.3, refers to the speech-to-gesture Zipf’s Law.
* We calculated the distribution of the dynamically estimated margin, with the results shown in the table below. A more detailed distribution can be seen in the [attached PDF](https://openreview.net/attachment?id=LU1t0zCyyb&name=pdf).
| **Margin**|[0, 10)|[10, 20)|[20, 30)|[30, 40)|[40, 50)|[50, ∞)|
|----|:--:|:--:|:--:|:--:|:--:|:--:|
| **Count** | 113 | 3460 | 3264 | 230 | 23 | 6 |
* Only 1.6% fall into the [0, 10) range, which means that the margin is primarily determined by the estimated average frames of each gloss in our paper (i.e., len(frames)/len(text) * 2.3).
2. Sensitivity analysis on the threshold: We designed a minimum threshold to ensure the margin is not too small.
* Experiment setup: To make the analysis more principled, we evaluated the threshold values at [0, 10, 20, 30, 40, 50]. Note that threshold=0 and threshold=50 indicate that the margin is dominated by the estimated margin or the fixed threshold, respectively.
* Experiment results: We uniformly trained for 80 epochs on PHOENIX-2014T due to resource limitations. The results in the table below indicate that SignCL is not sensitive to the threshold parameter, with a variance of 0.062.
| **Threshold** | 0 | 10 | 20 | 30 | 40 | 50 |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| **B@4** | 17.24 | 17.63 | 17.55 | 17.63 | 17.13 | 17.11 |
3. Sensitivity analysis on dynamically estimated margin: We use text length and Zipf’s factor to estimate the average frames of each gloss (gloss-free setting).
* Experiment setup: To make the analysis more principled, we first use gloss labels to calculate the ground truth margin distribution for PHOENIX-2014T (green in Figure 8), which yields a specific Zipf’s factor of 2.1. We then evaluated Zipf’s factor values of [1, GT, 2.3, 3, 4, 8]. Note that Zipf’s factor = 1 means we use len(frames)/len(text) directly to estimate the margin, while GT represents using the ideal len(frames)/len(gloss) to determine the margin (Zipf’s factor = 2.1).
* Experiment results: The results show that using Zipf’s factors between 1 and 4 does not lead to significant differences. When Zipf’s factor is set to 8, there is a noticeable drop in performance due to the margin being too large, which leads to a high probability of sampling from the same negatives.
| **Zipf’s factor** | 1 | GT | 2.3 | 3 | 4 | 8 |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| **B@4** | 17.45 | 17.89 | 17.63 | 17.29 | 17.10 | 16.26 |
4. Conclusion for sensitivity analysis.
* We noticed that SignCL is not sensitive to the threshold and Zipf’s factor.
* We believe this insensitivity is because the contrastive function inherently accommodates variability by focusing on relative differences rather than the absolute correctness of positive and negative pairs. The size of the margin boundary does not significantly affect the overall performance of contrastive learning, as long as it is not too large or too small.
* To systematically determine the optimal margin, we suggest setting it approximately equal to the mean ± standard deviation of the estimated average frames per gloss for a given dataset, e.g., len(frames)/len(text) * 2. The threshold can be the mean - standard deviation.
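The dynamic margin from Formula 4, together with the negative-sampling rule it drives, can be sketched in a few lines. This is a minimal illustration; the function names and the sampling helper are our own assumptions, not the authors' actual code:

```python
import random

# Illustrative sketch of the dynamic margin from Formula 4:
#   margin = max(threshold, len(frames) / len(text) * zipf_factor)
# All names here are our own; this is not the authors' implementation.

def dynamic_margin(num_frames, text_len, zipf_factor=2.3, threshold=10):
    """Estimate the frame distance beyond which another frame of the
    same video is treated as a negative sample."""
    estimated = num_frames / text_len * zipf_factor  # avg frames per gloss
    return max(threshold, round(estimated))

def sample_negative_index(anchor, num_frames, margin, rng):
    """Pick a frame index at least `margin` frames away from the anchor."""
    candidates = [i for i in range(num_frames) if abs(i - anchor) >= margin]
    return rng.choice(candidates)
```

For instance, a 160-frame video paired with a 10-word sentence yields a dynamically estimated margin of 37, while a short 20-frame clip falls back to the minimum threshold of 10 — consistent with the observation above that only a small fraction of samples are threshold-dominated.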
**Q1: Why not randomly select from other videos with the same signer?**
* This is an insightful question that explores alternative sampling methods. We did consider this approach. However, sampling from other videos with the same signer would require additional labels to identify the signer, which limits the applicability of SignCL. We believe maintaining a gloss-free setting is important as it represents a significant trend in the field.
* As a supplementary experiment, we attempted to randomly select other frames within the same batch as negative samples (in-batch sampling). Unfortunately, this approach resulted in worse performance on PHOENIX-2014T and CSL-Daily datasets. This decline is because the model may use non-sign language features for contrastive learning, such as signer characteristics and background elements.
* We have found that sampling within the same video has distinct advantages. It natively ensures consistency of non-sign features, allowing the contrastive learning process to focus on sign-specific features. This is our best practice.
**W2: Looking forward to more results on gloss-based settings**
Thank you for your insightful question. We appreciate your interest in applying our contrastive learning method to the gloss-based approach. We have shown some results in Figure 3 and Appendix A.3.3, which indicate that the proposed SignCL method also benefits the gloss-based method. To further validate this, we have applied SignCL to gloss-based feature extraction (e.g., Self-Mutual KD [25]) and translation methods (e.g., Joint-SLT [5]). The results indicate that SignCL can indeed enhance fully gloss-based SLT.
Results on PHOENIX-2014T:
| **Methods / Feature Extraction** | **WER ↓** | **B@4 ↑** |
|--|:--:|:--:|
| Joint-SLT / Self-Mutual KD | 25.38 | 22.79 |
| + SignCL into Feature Extraction | 24.76 | 23.23 |
| + SignCL into Downstream Training | 25.12 | 22.92 |
| + SignCL into Both Stages | **24.58** | **23.46** |
---
Rebuttal 2:
Comment: Dear Reviewer,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper. Our response has been available for several days now. With the discussion deadline approaching, we kindly inquire if there is any further information or clarification you might need from our side.
In our response:
> We have included additional margin distribution statistics and sensitivity analysis. The experiments demonstrate that the SignCL method is not sensitive to the margin parameter.
> Additionally, thank you for your insightful suggestion; we have applied SignCL to the gloss-based method, which has also shown improved performance.
We would greatly appreciate your prompt response and thank you once again for your valuable comments and insightful suggestions.
We apologize if this follow-up causes any inconvenience. Hope you have a great day!
Title: Kindly Follow-up on the Discussion | null | null | null | null | null | null |
Constrained Synthesis with Projected Diffusion Models | Accept (poster) | Summary: This paper proposes an approach to sample generation using diffusion models which adheres to a set of constraints. The approach is based on the score matching formulation of diffusion models, and applies a projection step which finds the nearest feasible sample to each iteration of SGLD. A theoretical justification for the method is proposed, which provides intuition around why the approach yields superior results to conditioning and post-processing methods. The method is evaluated on a wide variety of experiments covering multiple classes of constraints (including non-convex) and data modalities, and is demonstrated to yield high-quality samples which strictly adhere to non-trivial constraints.
Strengths: This is a very strong submission which is well-written and easy to read. The method is very sensible, and described clearly.
The experiments are diverse and interesting, spanning multiple application domains, data modalities (e.g., images, configuration space trajectories) and constraint types. In particular, the motion planning experiments with non-convex constraints are particularly interesting to me since synthesis with non-convex constraints has not really been approached in prior works.
Furthermore, the demonstration of synthesis with constraints applied only at test time, with the constraints not being applied during training and having the training data violating the constraints is also very appealing.
Finally, the authors also provide some theoretical justification and intuition of the proposed approach for convex constraints, which is a nice addition.
Weaknesses: I think a mathematical description of the projection operator and constraints for the experiments 5.1-5.3 would be helpful to understand the degree of non-convexity and difficulty for these experiments. While I have a rough idea, being concrete would help a lot.
Also, I believe it is important to be more forward in the main document around the computational overhead when it comes to applying the projection operator at each SGLD iteration. While I believe that high overhead does not particularly detract from the contribution (this can be addressed in future works), I think it detracts from the paper to hide this information in the appendix and not refer to it obviously in the main text.
Technical Quality: 4
Clarity: 4
Questions for Authors: line 158: Does "convergence of convex constraint sets" simply mean that the projection step will always return a feasible solution? Or are you referring more broadly to the DDPM iterates?
line 186: Double colon typo "::"
Fig. 1: Given that the approach projects to the constraint set at each iteration, why are we still seeing constraint violation early on in the diffusion process? Is this plotting the pre-projected samples? It is unclear what is actually being plotted.
line 272: typo, "unfeasible"
line 331: some comment around the reasonability of the assumption of the score function being convex (locally for small step size say?) would be appreciated
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have clearly described the limitations of the approach, however I believe a clearer statement of compute overhead would improve the clarity around the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing our work and for your kind words about our submission. We appreciate your consideration of our work and would like to address your outstanding questions and concerns.
>**Weakness 1: I think a mathematical description of the projection operator and constraints for the experiments 5.1-5.3 would be helpful to understand the degree of non-convexity and difficulty for these experiments. While I have a rough idea, being concrete would help a lot.**
- **Constrained Materials:** For this experiment we use a classical knapsack constraint. Note that while knapsack problems are NP-complete in general, the version adopted (which uses integer weights) is known to be weakly NP-complete and admits a fully polynomial-time approximation scheme. It is solved efficiently in $O(nm)$ time, where $n$ is the number of pixels and $m$ is the number of values to compute in the dynamic program.
- **3D Human Motion:** This is a scaling constraint that adjusts all positions of the figure by the relative distance between the lowest joint and the floor, shifting the figure to prevent floating and penetration, i.e.,
$\arg\min_y \|y - x\|$ s.t. $\min_i (y_i) = 0 \quad \text{and} \quad \|y_j - y_k\| = \|x_j - x_k\| \;\; \forall j, k$.
Additionally, we impose a realism constraint on appendages, keeping them consistent with one another, i.e.,
$\|\textrm{elbow}_\textrm{right} - \textrm{wrist}_\textrm{right}\| = \|\textrm{elbow}_\textrm{left} - \textrm{wrist}_\textrm{left}\|$. The implementation runs in $O(n)$ time, where $n$ is the length of the internal representation.
- **Constrained Trajectories:** This problem primarily represents constraints as a minimum distance, $d$, between the center point of an obstacle, $q$, and the closest point falling between each set of consecutive points. More formally, this is expressed by evaluating the distance $d_{\text{min}}$ from $q$ to the line segment $\overline{p_i p_{i+1}}$, where $d_{\text{min}}$ is determined as follows: if the projection of $q$ onto the line defined by $p_i$ and $p_{i+1}$ falls within the segment, $d_{\text{min}}$ is the perpendicular distance from $q$ to the line; otherwise, $d_{\text{min}}$ is the distance from $q$ to the nearest endpoint, $p_i$ or $p_{i+1}$. The constraint is thus $d_{\text{min}} > d$, ensuring that the nearest point on the segment to $q$ is at least a distance $d$ away.
The interior point method used by the nonconvex solver in our implementation [27] has a time complexity of $O(n^{3.5})$, where $n$ is the number of variables in the quadratic program; in this case, $n = 128$.
- **Physics-informed Motion:** This is similar to Section 5.2 and runs in $O(n+m)$ where the size of each image is $n$ by $m$.
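For intuition, the classical $O(nm)$ dynamic program referenced for the knapsack constraint above can be sketched as follows. This is the generic textbook formulation with our own variable names, not the authors' projection code:

```python
def knapsack_max_value(weights, values, capacity):
    """Classic 0/1 knapsack dynamic program: O(n * m) time, where
    n = number of items and m = capacity (integer weights)."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable with budget c
    for w, v in zip(weights, values):
        # Iterate budgets downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

For example, `knapsack_max_value([2, 3, 4], [3, 4, 5], 5)` returns 7, selecting the items of weight 2 and 3.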
We appreciate this suggestion and will add the formalization of these operators to our next draft.
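The minimum-distance constraint described for the trajectory experiments can be made concrete with the standard point-to-segment distance in 2D. This is our own illustrative code, not the solver from [27]:

```python
import math

def point_segment_distance(q, p1, p2):
    """Distance from obstacle center q to the segment p1-p2 (2D points)."""
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q[0] - p1[0], q[1] - p1[1]
    seg_len_sq = vx * vx + vy * vy
    if seg_len_sq == 0.0:                 # degenerate segment: p1 == p2
        return math.hypot(wx, wy)
    t = (wx * vx + wy * vy) / seg_len_sq  # projection parameter onto the line
    if t <= 0.0:                          # closest point is endpoint p1
        return math.hypot(wx, wy)
    if t >= 1.0:                          # closest point is endpoint p2
        return math.hypot(q[0] - p2[0], q[1] - p2[1])
    cx, cy = p1[0] + t * vx, p1[1] + t * vy
    return math.hypot(q[0] - cx, q[1] - cy)  # perpendicular distance

def trajectory_feasible(points, obstacle, d_min):
    """The constraint d_min < distance must hold for every consecutive segment."""
    return all(point_segment_distance(obstacle, points[i], points[i + 1]) > d_min
               for i in range(len(points) - 1))
```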
>**Weakness 2: Also, I believe it is important to be more forward in the main document around the computational overhead when it comes to applying the projection operator at each SGLD iteration. While I believe that high overhead does not particularly detract from the contribution (this can be addressed in future works), I think it detracts from the paper to hide this information in the appendix and not refer to it obviously in the main text.**
We are open to additional suggestions on how to showcase this component more prominently in our paper. While we do explicitly reference Section F in our Discussion and Limitations section (Section 7), we were unable to make space for this table within the nine pages allowed for submission. This is definitely something we will address in the extra page allowed for the accepted version.
>**Question 1: line 158: Does "convergence of convex constraint sets" simply mean that the projection step will always return a feasible solution? Or are you referring more broadly to the DDPM iterates?**
This line specifically refers to the latter: **by proximal gradient descent theory, we can guarantee convergence to both a feasible and optimal solution when the problem is convex.** You are also correct in that PDM _guarantees_ feasibility for convex constraints!
>**Question 2: Fig. 1: Given that the approach projects to the constraint set at each iteration, why are we still seeing constraint violation early on in the diffusion process? Is this plotting the pre-projected samples? It is unclear what is actually being plotted.**
Figure 1 shows the samples after the gradient step is applied (line 6 of Algorithm 1) but **prior to the projection.** The point we would like to illustrate here is that later, in the reverse process, the projections have minimal impact on the sample as additional diffusion steps do not result in constraint violations. This also means that the samples do not directly fall on the constraint boundaries as they converge within the feasible region. Conversely, when post-processing a conditional model's outputs (*Cond+*), the sample is dramatically altered by this projection, resulting in _much higher FID scores_ and _deviation from the real data distribution_.
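To make the contrast with post-processing concrete, a minimal sketch of the projected sampling loop (gradient step, then projection, at every iteration) might look as follows. The box projection and the quadratic score are toy stand-ins we chose for illustration, not the paper's learned score network or constraint sets:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto a box constraint set (convex)."""
    return np.clip(x, lo, hi)

def projected_langevin_sampling(score_fn, x0, step_sizes, project, rng):
    """Langevin-style updates with a projection after *every* step,
    rather than a single projection after the final step."""
    x = x0
    for eps in step_sizes:
        noise = rng.standard_normal(x.shape)
        x = x + eps * score_fn(x) + np.sqrt(2 * eps) * noise  # gradient step
        x = project(x)  # keep the iterate inside the feasible set
    return x
```

Because the projection is the last operation of every iteration, later projections barely move the sample once the iterates settle inside the feasible region, which is the behavior the figure is meant to illustrate.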
---
We appreciate your review and feedback and are happy to address any other questions you may have. It is our hope that these responses have addressed any concerns that you may have and provided confidence in supporting even further our work! Thank you for your time and consideration.
---
Rebuttal 2:
Comment: We thank Reviewer p9zr for their thoughtful feedback and for recognizing the strengths of our submission and praising our experiments as _diverse and interesting_. We appreciate the opportunity to clarify the aspects of our work that you highlighted in your review.
In our main rebuttal, we addressed the following key points:
1. **Detailed Mathematical Descriptions**: We have provided a more detailed mathematical description of the projection operators used in experiments 5.1-5.3. This includes the specific constraints applied and their computational complexities, which help illustrate the degree of non-convexity and the challenges associated with these experiments.
2. **Computational Overhead**: We acknowledge your point and this information is now more prominently discussed.
3. **Other Clarifications**: For Figure 1: it shows the samples post-gradient step but pre-projection. This helps in understanding the iterative convergence of the method within the constraint set through the diffusion process. Additionally, we provided a clearer explanation of what we mean by “convergence of convex constraint sets” in line 158, linking it to both feasibility and optimality in the context of proximal gradient descent.
Are there any additional concerns that we could address? We are ready to provide further insights to assist in your evaluation and to enhance the understanding of our findings.
---
Rebuttal Comment 2.1:
Comment: As the discussion period is nearing its end, we wanted to ask if there are any follow-up points we can clarify. Please also note our summary in the previous comment. Many thanks! | Summary: This paper proposes Projected Diffusion Models (PDM) for constrained generative modeling. The key idea is to reframe the denoising process of diffusion models as a constrained optimization problem, iteratively projecting the generated samples onto a constraint set at every denoising step. The method is validated on several applications involving both convex and non-convex constraints, generally outperforming both conditional models and models that only project onto the constraint set after the last denoising step. The authors provide feasibility guarantees for convex constraints and optimality guarantees for convex constraints and likelihoods.
Strengths: * The paper is well-written and structured. The authors clearly describe their method, the motivation behind it, and the experimental settings it is evaluated in.
* The method is straightforward to implement and compatible with pre-trained diffusion models, only requiring access to a projection operator at inference time.
* The experiments cover a wide range of applications and constraint types, including both convex and non-convex constraints, as well as in- and out-of-distribution scenarios.
Weaknesses: * The paper heavily relies on Fréchet Inception Distance (FID) scores for the empirical comparison of models in Sections 5.1, 5.2, and 5.4. While FID is a standard metric for evaluating generative models on ImageNet-like images, it is unclear how meaningful it is in the specialized domains explored in this work (e.g., material microstructures, human poses, physics-informed simulations).
* The paper presents PDM as a novel optimization technique that "recasts traditional denoising strategies as a constrained optimization problem". However, if I understand it correctly, it is an application of commonly employed post-processing projections (e.g. the references provided in the manuscript) to the Langevin MCMC sampling scheme of Song and Ermon, NeurIPS 2019. It would be helpful if the authors could clarify this and outline any additional novelty/contributions.
Technical Quality: 2
Clarity: 3
Questions for Authors: * The implementational details in Appendix F state that all experiments were carried out with $T=10$ diffusion time steps. This is a very small value, compared to the hundreds or thousands of time steps that are used in standard image diffusion papers. Is there a reason for this?
* How many samples were used to compute the quantitative performance metrics reported in Section 5? Would you expect a change in relative performance when using the same inference-time compute budget, i.e., generating 50% more conditional samples in the Constrained Materials application, since the conditional model is ~50% faster?
* Figure 2 visualizes the constraint satisfaction rate of a conditional model as a function of the relative error tolerance, given in percentage points. What does an error tolerance of 100% correspond to?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The main methodological limitation of the proposed technique is its increased computational cost. The authors adequately address this limitation and suggest different approaches to overcome it. As outlined above, I believe that the main limitation of the experimental evaluation presented in the manuscript is its reliance on the FID metric.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We will include abridged versions of our responses in this rebuttal window, but we ask that the reviewer refers to our complete answers in the comments.*
Thank you for your time and efforts in providing feedback on our paper. We appreciate your acknowledgment of the diversity of our applications and constraint sets, as we consider this a significant component of our empirical validation of the proposed method. Let us emphasize that PDM reports state-of-the-art results in these diverse domains: it outperforms baselines [5,11] in real-world microstructure synthesis when evaluated using FID, heuristic-based analysis, and constraint adherence; it provides the first method we are aware of to _report zero violations in human motion synthesis_; it surpasses the current state-of-the-art in diffusion-based trajectory optimization, reporting perfect feasibility with identical path length; and it demonstrates applicability to video generation with complex, ODE-based constraints. Not only does the range of domains on which we evaluate our method exhibit its robustness across various constraint sets and data settings, but we also provide solid theoretical arguments for these behaviors. We hope you would agree that this demonstrates the generalizability of our framework and provides a compelling reason for the paper to be considered positively.
>**W1: The paper heavily relies on Fréchet Inception Distance (FID) scores for the empirical comparison...**
First, we would like to point out, as the reviewer has acknowledged, that FID is a standard metric for evaluating generative models, and evaluation using this metric is _more than reasonable_. However, we believe the reviewer may have missed that **the paper indeed already reports several additional, domain specific metrics to supplement these results**. For Sections 5.1, 5.2, and 5.3 we include additional metrics (please see Section E.1, Section E.2, and Figure 6). Additionally, we will highlight that the baselines in Sections 5.2 and 5.4 use FID as their primary metric for evaluation ([29,30] and [26]).
>**W2: The paper presents PDM as a novel optimization technique that "recasts traditional denoising strategies as a constrained optimization problem"... it is an application of commonly employed post-processing projections...**
Note that we have indeed acknowledged that some post-processing steps have been proposed previously, and indeed we compare against such methods. However, these methods post-process **after** the sampling process, which is a key difference in light of our theoretical analysis. As discussed in Section 6, our approach is based on the insight that the cost of projection increases with the number of unconstrained steps (see also the illustration in Section E.3). Crucially, we have shown that post-processing approaches produce samples of much lower quality than those produced by PDM: PDM improves material synthesis FID scores by over 30%, improves feasibility rates in trajectory optimization by 90%, and doubles the quality of physics-based video generation. We argue theoretically, and demonstrate empirically, that a single post-processing projection leads to a significant divergence from the distribution in all the settings we examined. **This is a key novel contribution of this paper**.
>**Q2: Would you expect a change in relative performance when using the same inference-time compute budget, i.e., generating 50% more conditional samples in the Constrained Materials application, since the conditional model is ~50% faster?**
For the example that was referenced, as one may extrapolate from Figure 2: while the conditional model may generate 3000 samples in the time span it took PDM to generate 2000 samples, if the error tolerance is below ~35% (an absurdly high margin in this setting!), then PDM generates many more feasible samples than the conditional model within the same compute budget. This discrepancy is further emphasized at more reasonable tolerances, such as 5%, where **PDM generates nearly seven times as many feasible samples within the same compute budget.** Hence, when constraints are integral to the outputs, PDM does, in fact, outperform conditional models in relative speed as well!
>**Q3: Figure 2 visualizes the constraint satisfaction rate... what does an error tolerance of 100% correspond to?**
When representing the porosity levels, we provide a percentage of pixels that should be below a provided threshold, as dark regions of the image represent damaged regions of the microstructure. For example, if an image had 50% porosity and 40% porosity was specified, the error tolerance that would make this feasible is 10%. Notice that our proposed method **guarantees** constraint adherence here, which is key for the scientific application tested.
>**L1: The main methodological limitation... is its increased computational cost... the main limitation of the experimental evaluation presented in the manuscript is its reliance on the FID metric.**
First, we agree with the reviewer's point about increased computational cost. This is an inherent byproduct of constraining any optimization problem, especially when providing guarantees of adherence to the constraint set. However, we would point you to our response to Question 2 for more context on this overhead. Additionally, we note that in many settings, including those studied in this paper, *constraint-agnostic models cannot be used due to the necessity of adhering to the constraint set.*
Second, we will point the reviewer to our response to Weakness 1. We hope that the reviewer will take the opportunity to revisit the additional metrics that we bring attention to here, and also to examine the evaluation criteria used by the referenced baselines. As this is the primary justification given for the current score, we would ask you to consider raising it in light of this context.
---
Rebuttal 2:
Comment: >**Weakness 1: The paper heavily relies on Fréchet Inception Distance (FID) scores for the empirical comparison...**
First, we would like to point out, as the reviewer has acknowledged, that FID is a standard metric for evaluating generative models, and evaluation using this metric is _more than reasonable_. However, we believe the reviewer may have missed the additional metrics we use, which are tailored to specific domains.
- For Section 5.1, we use **the same heuristic-based metrics used by Choi et al. [5]**; we report these in Section E.1, finding that the realism of PDM's generations surpasses the baselines under this evaluation as well.
- Additional metrics for Section 5.2 are reported in Section E.2, although we will highlight that the compared baselines report FID as their primary metric for generation quality [29,30].
- Section 5.3 uses **domain specific metrics from the state-of-the-art baseline [3]**.
- Finally, we highlight that the primary metric used by the baseline in Section 5.4 [26] is a variation of the FID score, making this an appropriate point of comparison.
Thus, we believe our use of FID score is indeed appropriate for the settings explored, especially as **the paper indeed already reports several additional metrics to supplement these results**.
>**Weakness 2: The paper presents PDM as a novel optimization technique that "recasts traditional denoising strategies as a constrained optimization problem". However, if I understand it correctly, it is an application of commonly employed post-processing projections (e.g. the references provided in the manuscript) to the Langevin MCMC sampling scheme of Song and Ermon, NeurIPS 2019. It would be helpful if the authors could clarify this and outline any additional novelty/contributions.**
Thank you for this question. As discussed in our related work section, the novelty of this paper arises from (1) framing the reverse diffusion process as a constrained optimization problem, (2) formulating and implementing **general** projections for constraints and physical principles with important relevance for scientific and engineering applications, and (3) proposing a novel theoretical analysis to show that constraint adherence is not only feasible, but that guarantees can also be attained for many important classes of constraints with significant application relevance.
Note that we have indeed acknowledged that some post-processing steps have been proposed previously, and indeed we compare against such methods. However, these methods post-process **after** the sampling process, which is a key difference in light of our theoretical analysis. As discussed in Section 6, our approach is based on the insight that the cost of projection increases with the number of unconstrained steps (see also the illustration in Section E.3). Theorem 6.2 supports this by showing that the projection cost is lower when the sample starts from the feasible set, leading to better convergence properties. This matters because a high projection cost results in significant divergence from the distribution, as we observed in all settings we explored. To our knowledge, this is the first theoretical result on constraint adherence in diffusion models, marking a key contribution of our work.
Crucially, we have shown that post-processing approaches produce samples of much lower quality than those produced by PDM: PDM improves material synthesis FID scores by over 30%, improves feasibility rates in trajectory optimization by 90%, and doubles the quality of physics-based video generation. We argue theoretically, and demonstrate empirically, that a single post-processing projection leads to a significant divergence from the distribution in all the settings we examined. **This is a key novel contribution of this paper**.
Additionally, similar to the theory behind projected gradient descent, using projections throughout the reverse process is advantageous as we strive to reach the constrained minimum. Once again, this is a key novel contribution of our work.
Projections help guide the optimization process toward a region of the distribution that satisfies the constraints. Without regular projections, the optimization path may venture far outside the feasible space, potentially resulting in poor convergence properties, as we demonstrate in several experiments in the paper. In many ways projections guide the diffusion process of PDM in a similar spirit to the effect of conditioning *Cond* models, with the exception that this guidance imposes hard constraints on the generation process which we have shown are vastly more reliable than state-of-the-art conditioning techniques.
(1/3)
---
Rebuttal 3:
Comment: >**Q1: The implementational details in Appendix F state that all experiments were carried out with $T = 10$ diffusion time steps...**
This is actually *standard for the score-based models we used*. Specifically, we would like to point the reviewer to Section 5 of [24] where they explain in the "Setup" paragraph that they set this value to 10 (note that these authors use $L = 10$, which is equivalent in their notation).
>**Question 2: How many samples were used...? Would you expect a change in relative performance when using the same inference-time compute budget, i.e., generating 50% more conditional samples in the Constrained Materials application, since the conditional model is ~50% faster?**
For the computed metrics, we used 2000 samples from each model for the FID metrics and the additional metrics provided in Section E. You bring up an interesting point about relative performance, which we take to mean the number of feasible samples generated within a given compute budget. For the example that was referenced, as one may extrapolate from Figure 2: while the conditional model may generate 3000 samples in the time span it took PDM to generate 2000 samples, if the error tolerance is below ~35% (an absurdly high margin in this setting!), then PDM generates many more feasible samples than the conditional model within the same compute budget. This discrepancy is further emphasized at more reasonable tolerances, such as 5%, where **PDM generates nearly seven times as many feasible samples within the same compute budget.** Hence, when constraints are integral to the outputs, PDM does, in fact, outperform conditional models in relative speed as well!
>**Q3: Figure 2 visualizes the constraint satisfaction rate... what does an error tolerance of 100% correspond to?**
When representing the porosity levels, we provide a percentage of pixels that should be below a provided threshold, as dark regions of the image represent damaged regions of the microstructure. Note that in our generated data, higher porosity levels have more dark regions than lower porosities. In Figure 2, the x-scale represents the deviation between the generated data's porosity and the specified porosity. For example, if an image had 50% porosity and 40% porosity was specified, the error tolerance that would make this sample feasible is 10%. A reasonable error tolerance would likely be below 10%, although for precision applications this would still be far too high. Notice that our proposed method **guarantees** constraint adherence here, which is key for the scientific application tested.
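To make the x-scale of Figure 2 fully concrete, the computation is (illustratively, with hypothetical names):

```python
def error_tolerance(generated_porosity: float, specified_porosity: float) -> float:
    """Smallest tolerance at which a generated sample counts as feasible:
    the absolute deviation between the generated and specified porosity."""
    return abs(generated_porosity - specified_porosity)

# The worked example above: 50% generated porosity vs. 40% specified -> 10%.
tol = error_tolerance(0.50, 0.40)
```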
*A note on error tolerance in Figure 2:* We have been working directly with a material scientist collaborator in this domain, and for their applications it is _necessary_ to generate results which report zero violations, making wide error tolerances inviable for their work. This was a key motivation for our development of PDM. We would like to emphasize that generation of microstructures for energetic materials is a critical real-world problem in material science, presenting unique challenges such as data scarcity and the need to satisfy out-of-distribution constraints. This experiment has significant practical implications for the creation of new material structures, and our results are undergoing testing in laboratory settings. This speaks to the significance of our method.
>**L1: The main methodological limitation... is its increased computational cost... the main limitation of the experimental evaluation presented in the manuscript is its reliance on the FID metric.**
First, we agree with the reviewer's point about increased computational cost. This is an inherent byproduct of constraining any optimization problem, especially when providing guarantees of adherence to the constraint set. However, we would point you to our response to Question 2 for more context on this overhead. Additionally, we note that in many settings, including those studied in this paper, *constraint-agnostic models cannot be used due to the necessity of adhering to the constraint set.* Our method is specifically tailored to scientific and engineering applications and their associated challenges, including settings with little data, the absence of meaningful conditioning values, out-of-distribution generation, and particularly settings where exact constraint satisfaction is integral to data quality. In such settings, conditional models often cannot be used.
Second, we will point the reviewer to our response to Weakness 1. We hope that the reviewer will take the opportunity to revisit the additional metrics that we bring attention to here, and also to examine the evaluation criteria used by the referenced baselines. As this is the primary justification given for the current score, we would ask you to consider raising it in light of this context.
(2/3)
---
Rebuttal 4:
Comment: ---
Once again, we would like to express our thanks for your review. We have addressed each question and concern presented in your review. We believe our thorough responses provide the necessary clarifications to justify the significance of our work.
With these points in mind, we would greatly appreciate if you would re-evaluate your score to reflect the merit and significance of our contributions. If there are any additional specific reasons for the current assessment, we would appreciate further clarification and would be happy to discuss further. Thank you for your consideration!
(3/3)
---
Rebuttal 5:
Comment: We appreciate Reviewer yvsu’s detailed assessment and are grateful for the recognition of our _paper’s methodological clarity_ and _comprehensive experimental scope_.
In our main rebuttal, we addressed the following key points:
1. **Use of FID Scores**: Our evaluation does not rely solely on FID scores (the standard metric for evaluating diffusion models); it also includes results on constraint satisfiability at various fidelity levels, as well as additional domain-specific metrics reported in our experiments (see full response for details).
2. **Optimization Technique**: The distinctiveness of our method from traditional post-processing projections relies on the integration of projections directly into the Langevin dynamics, which is a novel contribution to the field. This approach not only improves sample quality but also ensures constraint adherence throughout the diffusion process. The theoretical analysis is also an important novel contribution of this work, which justifies our modeling choice and provides guarantees of constraint adherence for important constraint classes.
3. **Computational Efficiency**: We addressed concerns about the use of a reduced number of diffusion steps (T=10), explaining that this setting is also adopted by previous score-based diffusion models and suffices for achieving high-quality results efficiently. We also discussed how our method maintains efficiency compared to baselines, particularly when generating feasible samples within a given compute budget, showing that our model is, in effect, much faster in such settings.
Are there any additional concerns that we could address? We are ready to provide further insights to assist in your evaluation and to enhance the understanding of our findings.
---
Rebuttal Comment 5.1:
Comment: As the discussion period is nearing its end, we wanted to ask if there are any follow-up points we can clarify. Please also note our summary in the previous comment. Many thanks!
---
Rebuttal 6:
Comment: Again, we would like to thank you for your assessment and are grateful for the recognition of our paper's *methodological clarity* and comprehensive *experimental scope*. It has come to our attention that you have updated your score to a 5. As we have not had the opportunity to engage with you during this discussion period, we would like to inquire about the remaining concerns preventing you from advocating for strong acceptance. We believe we have addressed all the questions in your original review and would welcome the opportunity to discuss any outstanding doubts.
---
Rebuttal Comment 6.1:
Comment: My apologies, there was an issue with the visibility settings of my original response. Here it is, including a discussion of my remaining concerns:
I would like to thank the authors for the detailed response.
---
**Re W1:** The paper heavily relies on Fréchet Inception Distance (FID) scores for the empirical comparison...
> First, we would like to point out, as the reviewer has acknowledged, that FID is a standard metric for evaluating generative models, and evaluation using this metric is more than reasonable.
Perhaps there is a misunderstanding. My concern was that FID is a standard metric for evaluating generative models **trained on natural image datasets**, since it relies on the last layer of an Inception v3 model that was trained on ImageNet. It is still unclear to me why it would be "more than reasonable" to expect these representations to facilitate meaningful performance comparisons when applied to material microstructures, 3D human poses, etc. since these application domains are strongly out-of-distribution with respect to the ImageNet data the Inception model was trained on.
That being said, I appreciate the detailed clarifications and the pointers to alternative performance metrics. As far as I could tell, most of them are constraint satisfaction metrics (i.e. the *success percentage* in Figure 6 and the *penetrate* and *float* distances in Table 1), rather than sample quality metrics. However, the heuristics in Figure 13 do suggest that PDM matches the properties of the ground truth data better than conditional models on two of the three presented metrics. In light of this, I will raise my score by 1, but as outlined above my main concern regarding the applicability of FID to the settings in Sections 5.1, 5.2 and 5.4 remains.
---
Reply to Comment 6.1.1:
Comment: Thank you for clarifying where you stand with respect to our rebuttal. As demonstrated by the heuristic-based metrics in Figure 13, we have been intentional in selecting domain-specific metrics for the experiments we have conducted. It appears that, as you have acknowledged the inclusion of these metrics for Section 5.1, your only remaining concern is the use of the FID score in Sections 5.2 and 5.4.
As we pointed out in our original response, the use of FID scores for these experiments *comes directly from the literature/baselines with which we compare*. The FID score remains the predominant metric for evaluation on the HumanML3D dataset used in Section 5.2. This is not a metric we simply decided to use, but one that is necessary for comparison with existing literature. While one might argue that this is not an appropriate metric, we would counter that it is the most appropriate one, as it is the only metric that allows comparison with existing work. **When working on problems that the community has adopted as benchmarks, it seems most appropriate to use the metrics that have already been established.**
A similar case can be made for Section 5.4. As this evaluation metric is used throughout the diffusion model literature, **and particularly in the literature we are comparing to,** we would ask that our use of a metric so widely accepted and recognized in the community not count against our work.

---

Summary: This paper proposes Projected Diffusion Models (PDM), inspired by stochastic gradient Langevin dynamics, to generate samples that satisfy given arbitrary constraints and remain within the specified regions. The authors claim that the proposed algorithm is compatible across various applications, including satisfying morphometric properties when synthesizing materials, physics-informed motion generation, constrained path planning, and human motion generation, and provide theoretical analysis to guarantee that the generated samples reside in the constrained regions.
Strengths: 1. The proposed approach achieves zero violations of constraints during the sampling process.
2. The method is compared against several existing approaches in various experiments.
Weaknesses: 1. In limitation discussion, computational overhead is mentioned. How's the time complexity of PDM compared to ``Cond``, ``Cond+`` and ``Post+``?
2. I am not quite sure I really understand from the descriptions how ``Cond+`` differs from PDM in general, except for the Langevin dynamics part. If I perceive the differences between them correctly, ``Cond`` is in the classifier-free guidance regime, ``Cond+`` is in the classifier guidance regime, and ``Post+`` only applies the projection once at the end of the sampling process. It would be better to describe them using mathematical expressions.
Not a serious problem, but regarding writing style: I found it a bit hard to follow when cited papers sit in the middle of a sentence, such as ``, emulating the post-processing approaches of [10, 21] in various domains presented next.`` from lines 190-191; I do not really know what [10] and [21] are about or what the authors point to unless I check the reference list. It also looks a bit odd that no number follows the citation in ``The implementation of this model is as described by Ho and Salimans.``
Technical Quality: 2
Clarity: 2
Questions for Authors: Following the weaknesses above,
- For Equation (3b), it requires the whole trajectory in the reverse process to satisfy the constraints, while the $x_t$'s for large $t$ are basically noise. Is this optimization setup reasonable?
- What is the time complexity of finding the nearest feasible point when doing the projection? Will it take too long? How do you choose $M$ in Algorithm 1 to control the number of Langevin dynamics steps applied?
- While doing projections, from my understanding, the projected feasible point should be on the boundary of the constraints. Does that mean the final samples will all be projected onto the exact boundary?
To summarize this question: even though PDM reports better metrics than the others, I still suspect that the sampling distribution is not aligned with the data distribution, since the samples might all lie on the constraint boundaries.
- When evaluating out-of-distribution samples, is the constraint satisfaction rate the only metric?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations and positive societal impact are discussed in the paper, but negative societal impact is not mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes

---

Rebuttal 1:
Rebuttal: *We will include abridged versions of our responses in this rebuttal window, but we ask that the reviewer refers to our complete answers in the comments.*
Thank you for your valuable feedback. Before addressing your specific questions, let us emphasize the significant contribution provided by our work: Our proposed method provides constraint imposition with _formal guarantees_ for several important classes, and for the first time in the general and complex form posed! All prior work has either been unable to cope with the complex constraints that are crucial for the scientific and engineering applications of interest (which include ODEs, nonconvex constraint sets, and real-world scientific applications), has relied on costly post-processing methods that we have shown _fail to produce meaningful samples in the diverse, real-world set_ of studied domains, or has failed to supply formal guarantees of constraint satisfaction as provided by our work.
>**W1: How's the time complexity of PDM compared to Cond, Cond+ and Post+?**
We report the runtime difference between PDM and the other baselines in Section F. Additional details on time complexity of our projections are included in the comments below. Also, we will refer you to our response to Q2 by Reviewer yvsu, as this is closely related to your question of additional overhead (*as our PDM is much faster than conditional models when constraint adherence is necessary to generate viable samples!*).
>**W2: Unclear how Cond+ is different from PDM....**
To clarify: *Cond* uses classifier-free guidance; the conditional model uses the guidance scheme provided at the end of page 2 (line 83). *Cond+* and *Post+* are equivalent to current post-processing methods, and we would like to emphasize the significant performance gap between these methods and PDM. We appreciate your suggestion to include a clearer formalization of these methods in the paper and will do so in the final version of the manuscript. *More details are provided in the comments.*
>**Q1: For Equation (3b), it requires the whole trajectory in the reverse process satisfying the constraints, while $x_t$s' while $t$ is large are basically noise. Is this optimization setup reasonable?**
Theoretically, as discussed in Section 6, this approach is based on the insight that the cost of projection increases with the number of unconstrained steps. Theorem 6.2 supports this; importantly, the theorem shows that the projection cost is lower when the sample starts from the feasible set, leading to better convergence properties. This is further demonstrated empirically, as methods imposing constraints only at the final step perform significantly worse than PDM, especially in non-convex settings. Importantly, "performing worse" here refers *not* to constraint adherence, but to the ability to generate images from the original data distribution (i.e., high-fidelity outputs) while satisfying the imposed constraints. Indeed, we showed how projecting only in the last step results in a significant divergence from the distribution in all settings we explored.

Additionally, paralleling the theory behind projected gradient descent, as we aim to converge to the constrained minimum, there is a clear benefit in using projections to guide the optimization process toward a region where a minimum satisfying the constraints can be found.

Practically, we've observed that unconstrained optimization steps can deviate significantly from the feasible domain, making it difficult to converge to a feasible sample. This is effectively illustrated in Figure 1 in the paper, where our proposed guidance scheme, PDM, navigates the constraint-defined landscape to ensure convergence to a feasible sub-distribution.
>**Question 3: ... I still doubt the sampling distribution is not aligned with the data distribution since the samples might be all on the constraint boundaries.**
The projection will provide a point on the boundary of the constraints *if the original sample was infeasible.* In effect, this is why *Post+* and *Cond+* report such high FID scores: more likely than not (see, e.g., Figures 2 and 8), their samples are infeasible prior to post-processing. In contrast, PDM samples converge to feasible subdistributions. For instance, notice that in Figure 1 later timesteps result in no violations of the constraints. This implies that unless subsequent gradient steps were consistently along the constraint boundaries (for all the samples visualized), *the output samples did not fall on the constraint boundaries and instead were unique points within the feasible subdistribution.* PDM samples are "moved" to a subset of the distribution which is still optimal (maximizes the density function) but is also feasible. The projections solely enforce that the generated samples are taken from this region of the distribution.
Furthermore, FID scores are widely used in image generation tasks because of how well they benchmark the similarity between distributions. This metric is particularly useful because **it reliably assesses both sample quality and diversity;** models which lack sample diversity perform very poorly on FID. It is _widely accepted that this is the most appropriate method for assessing how well the sampling and data distributions align_, and we show how well our proposed method performs on FID while also satisfying the imposed constraints.
>**Q4: When evaluating out-of-distribution samples, is the constraint satisfaction rate the only metric?**
In this setting, the FID scores of the out-of-distribution generations did not change from the scores reported for the in-distribution generations. Hence, we do not report these separately.
---
If there are any additional reasons for the current assessment, we would appreciate further justification to address any remaining concerns. Thank you!
---
Rebuttal 2:
Comment: >**Weakness 1: In limitation discussion, computational overhead is mentioned. How's the time complexity of PDM compared to Cond, Cond+ and Post+?**
We report the runtime difference between PDM and the other baselines in Section F. We would be happy to provide details on the time complexity of our projection methods:
- **Constrained Materials:** This is a knapsack constraint. While knapsack problems are NP-complete in general, the version adopted (which uses integers as weights) is known to be weakly NP-complete and admits a fully polynomial approximation. It is solved efficiently in $O(nm)$, where $n$ is the number of pixels and $m$ is the number of values to compute for the dynamic program.
- **3D Human Motion:** This is a scaling constraint that runs in $O(n)$ time, where $n$ is the length of the internal representation.
- **Constrained Trajectories:** The interior point method used by the nonconvex solver in our implementation [27] has a time complexity of $O(n^{3.5})$, where $n$ is the number of variables in the Quadratic Program; in this case, $n = 128$.
- **Physics-informed Motion:** This projection runs in $O(n+m)$ where the size of each image is $n$ by $m$.
As we note in Section F, these projection operations have not been optimized for runtime, and these time complexities represent an upper bound. Additionally, we will refer you to our response to Question 2 by Reviewer yvsu, as this is closely related to your question of additional overhead (*as our PDM is much faster than conditional models when constraint adherence is necessary to generate viable samples!*).
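For intuition on why the scaling-type projection is linear-time, consider the following minimal sketch (a hypothetical illustration, not the operator in our implementation): a single pass computes the violation, and a rescale maps the point back onto the set.

```python
import numpy as np

def project_scaling(x: np.ndarray, bound: float = 1.0) -> np.ndarray:
    """Hypothetical O(n) projection for a scaling-type constraint:
    if the representation exceeds the bound, rescale it onto the set.
    One pass over the length-n vector to find the violation, one pass
    to rescale, hence O(n) overall."""
    norm = np.max(np.abs(x))      # single O(n) pass
    if norm <= bound:
        return x                  # already feasible: projection is the identity
    return x * (bound / norm)     # rescale onto the constraint boundary
```

Feasible inputs pass through unchanged, so the projection only acts when a Langevin step leaves the constraint set.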
>**Weakness 2: Unclear how Cond+ is different from PDM in general except the Langevin dynamics part from the descriptions but perceive the differences between them and if I am correct, Cond is in classifier-free guidance regime, Cond+ is in classifier guidance regime and Post+ only applies projection once at the end of sampling process. It would be better to describe them using mathematical expressions.**
To clarify,
- *Cond:* uses classifier-free guidance. The conditional model uses the guidance scheme provided at the end of page 2 (line 83).
- *Cond+:* This model is identical to *Cond*, but the final output $x_1$ is projected using our projection operator $\mathcal{P}_C$. Thus, this is a baseline introduced by this work and inspired by generative models using post-processing steps.
- *Post+:* This is an unconditioned score-based model of identical architecture to PDM. Instead of constraining the entire generation as with PDM, we project only on the final output $x_1$.
*Cond+* and *Post+* are equivalent to current post-processing methods, and we would like to emphasize the significant performance gap between these methods and PDM. We appreciate your suggestion of including a more clear formalization of these methods in the paper and will do so in our final version of the manuscript.
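For concreteness, the structural difference can be sketched in a few lines (a simplified, hypothetical illustration only: the score network, box constraint set, and step sizes below are stand-ins, not our actual implementation):

```python
import numpy as np

def project(x: np.ndarray, lo: float = 0.2, hi: float = 0.8) -> np.ndarray:
    """Stand-in projection P_C: clipping onto a box constraint set.
    (The paper's actual projections -- knapsack, scaling, QP -- are domain-specific.)"""
    return np.clip(x, lo, hi)

def score(x: np.ndarray) -> np.ndarray:
    """Stand-in for the learned score network: score of a standard Gaussian."""
    return -x

def sample(T: int = 10, M: int = 5, step: float = 0.1, d: int = 4,
           pdm: bool = True, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    for _ in range(T):                 # noise levels (T = 10, as in our setup)
        for _ in range(M):             # M Langevin steps per level
            x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(d)
            if pdm:
                x = project(x)         # PDM: project after EVERY update
    return x if pdm else project(x)    # Post+/Cond+: single projection at the end
```

Both variants return feasible samples, but PDM's iterates never leave the constraint set, so the final sample is not obtained by one large projection from a potentially distant infeasible point.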
(1/3)
---
Rebuttal 3:
Comment: >**Question 1: For Equation (3b), it requires the whole trajectory in the reverse process to satisfy the constraints, while the $x_t$'s for large $t$ are basically noise. Is this optimization setup reasonable?**
Our decision to enforce constraints at every step of the denoising process is driven by theoretical insights as well as practical observations.
Theoretically, as discussed in Section 6, this approach is based on the insight that the cost of projection increases with the number of unconstrained steps (see also the illustration in Section E.3). Theorem 6.2 supports this: importantly, it shows that the projection cost is lower when the sample starts from the feasible set, leading to better convergence properties. This is further demonstrated empirically, as methods imposing constraints only at the final step perform significantly worse than PDM, especially in non-convex settings. Importantly, "performing worse" here refers *not* to constraint adherence, but to the ability to generate images from the original data distribution (i.e., high-fidelity outputs) while satisfying the imposed constraints. Indeed, we showed that projecting only in the last step results in a significant divergence from the distribution in all settings we explored. Additionally, paralleling the theory behind projected gradient descent, as we aim to converge to the constrained minimum, there is a clear benefit in using projections to guide the optimization process toward a region where the minimum satisfying the constraints can be found. Without regular projections, the optimization path may explore regions far outside the feasible space, potentially leading to poor convergence properties.
We also note that this is, to the best of our knowledge, the first theoretical result on constraint adherence in diffusion models, and it is indeed a key contribution of our work.
Practically, we've observed that unconstrained optimization steps can deviate significantly from the feasible domain, making it difficult to converge to a feasible sample. This is effectively illustrated in Figure 1 in the paper, where our proposed guidance scheme, PDM, navigates the constraint-defined landscape to ensure convergence to a feasible sub-distribution.
Notice also that continuous projection throughout the sampling process is crucial for converging to feasible solutions in non-convex settings, as shown in Section 5.3. In these experiments, projecting throughout the sampling process allows our method to converge to feasible solutions consistently (i.e., **we never produced an unsatisfiable trajectory**!) with a single sample. In contrast, as shown in Figure 6, the *Cond+* method, a state-of-the-art method introduced to solve this specific problem in [21], which imposes "constraints" (a post-processing step) only at the final step, was never able to correct the infeasible samples, with the solver repeatedly reporting local infeasibility. This alone should be considered a substantial improvement over the state of the art.
>**Question 2: What's the time complexity to find the nearest feasible point while doing the projection? Will it take too long? How do you choose $M$ in Algorithm 1 to control the number of Langevin dynamics steps being applied?**
For the first part of this question, please refer to our response for Weakness 1.
To answer the latter part, we use the value of $M$ used by Song et al. [24] when this architecture was proposed, finding that the samples effectively converge with this value in our experiments. An analysis of the optimal value for $M$ could be conducted but is out of scope for this work, whose focus is on demonstrating constraint and physical-principle adherence in generative models.
(2/3)
---
Rebuttal 4:
Comment: >**Question 3: While doing projections, from my understanding, the projected feasible point should be on the boundary of the constraints. Does that mean the final samples will be all projected on the exact boundary then? To summarize this question, even though PDM has better metrics than others, I still doubt the sampling distribution is not aligned with the data distribution since the samples might be all on the constraint boundaries.**
The projection will provide a point on the boundary of the constraints *if the original sample was infeasible.* In effect, this is why *Post+* and *Cond+* report such high FID scores, as more likely than not (for example, see Figures 2 and 8) prior to post-processing these samples are infeasible. In contrast, PDM samples converge to feasible subdistributions. For instance, notice that in Figure 1 later timesteps result in no violations of the constraints. This implies that unless subsequent gradient steps were consistently along the constraint boundaries (for all the samples visualized), *the samples output did not fall on the constraint boundaries and instead were from unique points within the feasible subdistribution.* The effect of guiding the sampling with projections is that PDM samples are "moved" to a subset of the distribution which is still optimal (maximizes the density function) but is also feasible. The projections solely enforce that the generated samples are taken from this region of the distribution.
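The property invoked here, that a Euclidean projection returns a boundary point only when the input is infeasible, can be illustrated on a unit ball (a toy stand-in for the paper's actual constraint sets):

```python
import numpy as np

# Euclidean projection onto the unit ball: feasible points are returned
# unchanged; only infeasible points are mapped to the boundary.
def project_unit_ball(x):
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

inside = np.array([0.3, 0.4])    # ||x|| = 0.5, already feasible
outside = np.array([3.0, 4.0])   # ||x|| = 5.0, infeasible

assert np.allclose(project_unit_ball(inside), inside)               # untouched
assert np.isclose(np.linalg.norm(project_unit_ball(outside)), 1.0)  # boundary
```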
Furthermore, FID scores are widely used in image generation tasks because of how well they benchmark the similarity between distributions. This metric is particularly useful because *it reliably assesses both sample quality and diversity;* models which lack sample diversity perform very poorly when reporting FID scores. It is _widely accepted that this is the most appropriate method for assessing how well the sampling distributions and data distributions align_, and we show how well our proposed method performs in FID scores while also satisfying the imposed constraints. We believe this is a major strength of the proposed approach and hope you see its significance given the many scientific and engineering domains in which it can be adopted, as we demonstrate in our experiments.
>**Question 4: When evaluating out-of-distribution samples, is the constraint satisfaction rate the only metric?**
When answering this, we are presuming you are referring specifically to Section 5.4, although please correct us if that is not the case. We also report FID scores here; in this setting, the FID scores of the out-of-distribution generations did not change from the scores reported for the in-distribution generations. Hence, we do not report these separately, but they are applicable to both settings.
Additionally, we note that various out-of-distribution constraints are similarly added in Section 5.3 (the red obstacles in Figure 5, which were not present in the training data). Our method is equally robust in these settings, with consistent results across the studied topographies.
---
We believe that our detailed responses provide the necessary clarifications to all your questions and reinforce the robustness and significance of our work. In light of these clarifications, we kindly request that you re-evaluate your score to reflect the merit and significance of our contributions. Our work makes a substantial contribution to the application of generative processes in various engineering and scientific settings requiring satisfaction of constraints and physical rules, as we demonstrate in the paper.
If there are any additional, specific reasons for the current assessment, we would appreciate further justification to understand and address any remaining concerns. Thank you for your consideration!
(3/3)
---
Rebuttal 5:
Comment: We appreciate the feedback provided by Reviewer e4oh and are grateful that _our proposed method’s effectiveness and the theoretical underpinnings_ were recognized as strengths of our work.
In our main rebuttal, we addressed the following key points:
1. **Methodological Differences and Formalizations**: We provided a detailed explanation to distinguish our method from the proposed baselines: the Cond, Cond+, and Post+ techniques. These, together with additional mathematical expressions, will also be reflected in the final version of our paper.
2. **Computational Overhead**: We compared the time complexity of our Projected Diffusion Models with other models and highlighted the efficiency of our approach, especially when constraint adherence is critical, which is the case in all settings studied. For instance, in Section 5.1, when the error tolerance is within 5%, our method generates nearly 7 times as many feasible samples within the same compute budget.
3. **Optimization Setup**: As also pointed out in the paper, we explained why projections need to be applied to the entire trajectory. Our response included theoretical backing and empirical evidence demonstrating the effectiveness of our approach, particularly in avoiding divergence from the target distribution.
4. **Sample Diversity**: We clarified how our projections guide the sampling to a feasible subdistribution (Figure 1), addressing concerns about samples falling on the constraint boundary. Furthermore, we explained how this is captured in the FID score, as it is widely accepted that this is the most appropriate method for assessing how well the sampling distributions and data distributions align.
Are there any additional concerns that we could address? We are ready to provide further insights to assist in your evaluation and to enhance the understanding of our findings.
---
Rebuttal Comment 5.1:
Comment: As the discussion period is nearing its end, we wanted to ask if there are any follow-up points we can clarify. Please also note our summary in the previous comment. Many thanks!
---
Rebuttal 6:
Comment: Thank you for continuing to engage with us during this discussion phase. We would like to take the opportunity to respond to the points you have made.
> [W]e shouldn’t sacrifice the sample quality to [satisfy the constraints.]
While this may be the case for some applications, such as recreational image generation, the opposite is true in the scientific and engineering applications that motivate this study. When considering the practicality of diffusion models in these domains, the ability to satisfy constraints is integral to the viability of the outputs! In such settings, high-quality samples are desirable, making the FID score still relevant, **but feasible samples are necessary**. Thus, it is not sufficient to merely provide samples which report well on this single metric, which is what makes our method stand out. *Furthermore, if FID were the only metric that mattered in these settings, post-processing methods would never have been proposed in previous work, as we have demonstrated empirically that these produce much worse FID scores.* Yet these methods have still found an important place in existing discourse!
Note that this paper deals with real scientific and engineering settings presenting unique challenges such as data scarcity and the need to satisfy out-of-distribution constraints. In many scientific applications, such as the material science application we study, data collection is extremely expensive (it means synthesizing new materials) and the data collected may not provide feasible samples. This is exactly the case in our material science application (see Section 5.1). There, it is essential that exact morphometric properties are satisfied, but these settings were never observed nor measured before!
We remind the reviewer here that these constraints are imposed by expert material scientists with whom we collaborate and whose work particularly motivated this paper. The settings explored require stringent constraint adherence for laboratory testing (which our work is currently undergoing, attesting to the broader impact of our method).
Similarly, constraints are necessary to generate accurate simulations (Sections 5.2 and 5.4) and are clearly integral to trajectory optimization. Samples which are not feasible provide no merit when exact constraint satisfaction must be considered. The ODE constraints enforced to capture the motion of a free-falling object also allow us to generate objects falling on other planets, where such data is unlikely to be collected (see our simulations in Section 5.4).
_Next, let us note a simple but important fact pertaining to constrained optimization in general_: when imposing constraints on an optimization procedure, the feasible space inherently shrinks, and thus, of course, the cost objective is typically higher (in the case of a minimization procedure) than that associated with an unconstrained one. However, when constraints are part of the optimization problem, solutions (or samples) which do not satisfy the constraints **cannot be considered "optimal", or better solutions than feasible ones**. In our work, infeasible samples are not viable.
We would also like to highlight that the "much lower and better FID score" referred to in the response cannot be directly compared, as Yuan et al. use an **external simulator to post-process their outputs**; there, the simulator takes as input a vector of points produced by the generative model and is crucial to producing realistic results. This dramatically alters the diffusion model's outputs! The baseline we provide is a more apt point of comparison with regard to FID scores because it does not rely on external simulators and thus effectively compares the abilities of the generative models themselves.
Also note that in all experiments our FID scores are *very close to, or outperform, those of the conditional model* (for instance, Section 5.1 outperforms in terms of FID and heuristic-based metrics, and Section 5.3 outperforms in the domain-specific metrics). Furthermore, given that in many of these settings the training data is not necessarily feasible, sampling from a subdistribution of this dataset will inherently increase FID scores (as the diversity of the samples is limited by some margin for this conditional distribution).
While it may make sense to still use a conditional model in settings where constraint adherence is not valued, we reiterate that in many settings this is necessary. Hence, we disagree with the "blanket statement" provided.
(1/2)
---
Rebuttal 7:
Comment: > [T]he proof provided in the section 6 assumes the feasible region is convex, which is very strict and doesn’t apply in many cases.
From an optimization perspective, assuming convexity of the constraint set is more than reasonable. In fact, when providing convergence guarantees, this is almost universally assumed (as guaranteeing convergence of non-convex constraint sets is an unsolved problem).
While the theoretical guarantees hold for convex settings, our experiments extend to non-convex constraints as well (see Section 5.3). In constrained optimization this is a widely accepted reality, and it is also supported empirically by our results.
Please notice that *inclusion of theoretical results is meant to strengthen the understanding of our work, and should not be considered as a weakness.*
---
We hope our responses clarify your points. Are there any additional concerns that we could address? We are ready to provide further insights as needed.
(2/2)
---
Rebuttal Comment 7.1:
Comment: Thank you for your response to my concerns. I still believe that directly projecting the sample onto the feasible region at every Langevin dynamics sampling step is the **flaw** the algorithm possesses, which is reflected in the **high FID score** from experiment 5.1. As I mentioned in the last round of responses, ``Theoretically, the step size in the Langevin dynamics should be chosen very carefully such that the injected noise brings the randomness while not messing up the effect from the gradient step. in this case, there is no guarantee, and I also kindly disagree that the projection can still maintain the sample at its' noisy level and this will definitely mess up the sampling scheme, or the noise level should be correspondingly adjusted according to the projection but that seems infeasible`` is the reason why **the direct projection makes the algorithm deficient**, but not all projection ideas would fail. For example, [1] also has the idea of bouncing samples back to the feasible region throughout the reverse sampling process, but it does so under reasonable assumptions and justifies why this does not disrupt the sampling scheme, which makes more sense. [1] also points out, in its introduction, ``Although thresholding avoids failure, it is theoretically unprincipled because it leads to a mismatch between the training and generative processes.``, which is the point I have been trying to convey the whole time.
I firmly believe that the samples satisfying constraints are necessary, but if they are not generated in the way that follows the principle of how diffusion models train and sample, and how the theory works behind, the samples are meaningless from my point of view.
[1] Lou, Aaron, and Stefano Ermon. "Reflected diffusion models." International Conference on Machine Learning. PMLR, 2023.
---
Reply to Comment 7.1.1:
Comment: Thank you for your continued discussion with us.
Firstly, please note that we report very strong empirical results in our paper, demonstrating state-of-the-art in Section 5.1 and 5.3 *(we believe you referencing “high FID score from experiment 5.1” may be a typo here)* and state-of-the-art for constraint satisfaction methods on Sections 5.2 and 5.4, dramatically out-performing post-processing techniques which have found an important place in the literature.
Your concerns now seem to be more theoretical in nature. First, we encourage you to look at our discussion with Reviewer Jdpw, where we provide additional discussion of our formulation of the reverse process as an optimization problem. We have **proved convergence properties of our method in Section 6, under standard convexity assumptions**. However, let us go further: **[A] proves convergence for non-convex optimization using Langevin dynamics**, making our method broadly applicable, and again our strong results are empirical evidence of this.
As we have shown that the reverse process is the minimization of $- \log q(x)$, our method adapts the reverse process from a simple variation of gradient descent (SGLD) to a variation of projected gradient descent. **Projected gradient descent is known to converge (even in non-convex settings)** [B].
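As an illustration of the projected gradient descent analogy (a toy convex instance, not the paper's setting), PGD on a simple quadratic with a unit-ball constraint converges to the constrained minimizer:

```python
import numpy as np

# Sketch of projected gradient descent on min ||x - b||^2 s.t. ||x|| <= 1,
# whose solution is b / ||b|| whenever ||b|| > 1.
b = np.array([3.0, 4.0])

def project(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n  # Euclidean projection onto the unit ball

x = np.zeros(2)
for _ in range(100):
    grad = 2 * (x - b)               # gradient of the objective
    x = project(x - 0.1 * grad)      # gradient step followed by projection

# Converges to the constrained minimizer b / ||b|| = (0.6, 0.8).
assert np.allclose(x, b / np.linalg.norm(b), atol=1e-6)
```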
Our samples are not "meaningless". We have provided rigorous theoretical justification (and extensive empirical results), along with evidence that this work could be a significant resource for adopting generative models in the many engineering and scientific domains where physical principles and user-imposed constraints must be satisfied for the outputs to be considered "valid". We do hope that you agree with us on the significance and potential broader impact of these results, given the evidence provided.
---
[A] Raginsky, Maxim, Alexander Rakhlin, and Matus Telgarsky. "Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis." Conference on Learning Theory. PMLR, 2017.
[B] Vu, Trung, Raviv Raich, and Xiao Fu. "On convergence of projected gradient descent for minimizing a large-scale quadratic over the unit sphere." 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2019. | Summary: This paper proposes a diffusion model that imposes constraints on the generated output. However, the constraints here are not abstract verbal instructions but rather formalizable constraints. The authors propose projected diffusion model sampling to perform constraint conditional log-likelihood maximization at each time step. They demonstrate the effectiveness of the proposed method on a variety of application problems.
Strengths: * The paper demonstrates the effectiveness of the proposed method under various constraint conditions.
* Section F reports on the impact of projection on computational cost.
* The paper compares the proposed method to appropriate existing methods, such as post-processing correction.
Weaknesses: 1. Equation (3) is introduced without a clear explanation of the relationship between the reverse diffusion process of score-based models and maximizing the conditional density function.
2. The details of the projection algorithm used in each experiment are only described in words. Pseudocode would be helpful.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Please provide a more detailed explanation of the similarity between the reverse diffusion process of score-based models and maximizing the conditional density function.
2. As mentioned in Reference [20], it is common to gradually decrease the learning rate and $\gamma$ to improve the convergence rate of the proximal algorithm. Have you tried decreasing $\gamma$ as $1/i$ or $1 / \sqrt i$ within the inner loop of projected diffusion model sampling?
3. Please provide the computational order of the projection algorithm used in each experiment.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Section 7 adequately describes the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and efforts in providing feedback on our paper. First, let us emphasize the significant contribution provided by our work: our proposed method provides constraint imposition with _formal guarantees_ for several important classes, and for the first time in the general and complex form posed! All prior work has either been unable to cope with the complex constraints that are crucial for the scientific and engineering applications of interest (which include ODEs, nonconvex constraint sets, and real-world scientific applications), relied on costly post-processing methods that we have shown _fail to produce meaningful samples in the diverse and real-world set_ of studied domains, or failed to supply formal guarantees of constraint satisfaction. We appreciate your acknowledgment of our use of "formalizable constraints," as we view this as a key contribution that distinguishes our work from existing literature. Let us reiterate that the method we propose is able to handle _arbitrary constraint sets_, meaning that it can generalize to *any formal constraints that are posed.* We believe these to be compelling reasons as to why our paper should be considered!
>**Weakness 1 and Question 1: Equation (3) is introduced without a clear explanation of the relationship between the reverse diffusion process of score-based models and maximizing the conditional density function.**
The objective of the reverse diffusion process is to maximize the density function, and Equation 4 shows the _actual_ update steps that are learned by the score-based model: a noisy (SGLD) gradient ascent on the density function. The formalization of the objective provided in Equation (3a) is equivalent by construction to that of the reverse diffusion process as _this is the optimization procedure that is learned during the forward diffusion process_.
The model has learned to estimate the gradients necessary to solve this optimization problem, and the update step in Equation 4, which is taken directly from Song et al. [24,25], converges to a solution (or sample) that maximizes the density function. **In short, Equation (3a) is how the reverse diffusion process would be directly formalized as an optimization problem, and Equation (3b) is our method's extension of this objective to a constrained optimization problem.** We hope this clarifies your doubt.
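In symbols, and noting that this is our reconstruction from the rebuttal's description (following the update step it attributes to Eq. 4 of [24]) rather than the paper's exact statement of Equation (3), the relationship reads:

```latex
\begin{aligned}
&\text{(3a, unconstrained):} && \min_x \; -\log q(x), \quad
  x_{i+1} = x_i + \gamma \nabla_x \log q(x_i) + \sqrt{2\gamma}\,\epsilon_i \\
&\text{(3b, constrained):} && \min_{x \in C} \; -\log q(x), \quad
  x_{i+1} = \mathcal{P}_C\!\big(x_i + \gamma \nabla_x \log q(x_i) + \sqrt{2\gamma}\,\epsilon_i\big)
\end{aligned}
```

That is, the unconstrained update is noisy gradient ascent on the log-density, and the constrained variant composes each update with the projection operator $\mathcal{P}_C$.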
>**Weakness 2: The details of the projection algorithm used in each experiment are only described in words. Pseudocode would be helpful.**
We appreciate the interest in the lower-level implementation details. First, notice that the actual implementations for each of these projections **are indeed included in our submission**, and if you are interested we would encourage you to look at the provided code. Additionally, we would encourage you to refer to our response to _Reviewer p9zr Weakness 1_ where we provide the mathematical formalization of the projections.
>**Question 2: As mentioned in Reference [20], it is common to gradually decrease the learning rate and $\gamma$ to improve the convergence rate of the proximal algorithm. Have you tried decreasing $\gamma$ as $1/i$ or $1/\sqrt{i}$ within the inner loop of projected diffusion model sampling?**
Indeed, our implementation in Section D (Algorithm 2) does provide a dynamic adjustment of $\gamma$, which uses stochastic differential equations to create smoother transition kernels between timesteps [25]. We find that for our physics-informed motion experiments this produces better FID scores. In most of our experiments, however, $\gamma$ has been adjusted at the outer-loop level: while still decreasing with time, it does not provide as smooth a change as that explored in Section D, without meaningfully altering the algorithm's performance.
>**Question 3: Please provide the computational order of the projection algorithm used in each experiment.**
We note that additional details for each projection are also reported in Section C, as we mentioned in the main paper. We provide here additional details on the problems' complexity:
- **Constrained Materials:** This is a knapsack constraint. While knapsack problems are NP-complete in general, the version adopted (which uses integer weights) is known to be weakly NP-complete and admits a fully polynomial-time approximation. It is solved efficiently in $O(nm)$, where $n$ is the number of pixels and $m$ is the number of values to compute in the dynamic program.
- **3D Human Motion:** This is a scaling constraint that runs in $O(n)$ time, where $n$ is the length of the internal representation.
- **Constrained Trajectories:** The interior point method used by the nonconvex solver in our implementation [27] has a time complexity of $O(n^{3.5})$ where $n$ is the number of variables in the Quadratic Program. In this case, there are 128.
- **Physics-informed Motion:** This projection runs in $O(n+m)$ where the size of each image is $n$ by $m$.
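As an illustration of the cheapest case above, a hypothetical $O(n)$ scaling projection in the spirit of the 3D Human Motion constraint (the paper's actual constraint is not reproduced here) might look like:

```python
import numpy as np

# Hypothetical O(n) "scaling" projection: rescale a vector so its total
# magnitude fits a target budget; feasible inputs are returned unchanged.
def scale_projection(x, budget):
    total = np.abs(x).sum()       # one O(n) pass
    return x if total <= budget else x * (budget / total)

x = np.array([2.0, -4.0, 6.0])        # L1 mass = 12
y = scale_projection(x, budget=6.0)   # uniformly rescaled by 0.5
assert np.isclose(np.abs(y).sum(), 6.0)
assert np.allclose(y, x * 0.5)
```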
We will be happy to add this additional information to the appendix in our final version.
---
We appreciate the constructive feedback and have diligently addressed all the concerns raised. We believe that our detailed responses provide the necessary clarifications for the arguments presented in our paper. In light of our clarifications, we kindly request that you re-evaluate your score, particularly given that the only weaknesses identified were minor points that have now been fully addressed.
We also noticed your current score for soundness = 1. This appears to be unjustified, especially considering the rigorous methodological approach and theoretical contributions presented. If the score was intended to be revised following our responses, we hope that the comprehensive explanations provided will justify a favorable assessment. If there are any lingering doubts or additional questions, we are more than willing to discuss them further! Thank you.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response.
> The objective of the reverse diffusion process is to maximize the density function, and Equation 4 shows the actual update steps that are learned by the score-based model: a noisy (SGLD) gradient ascent on the density function.
Please cite papers that state that the objective of the reverse diffusion process is density function maximization.
I cannot find that the score-based methods maximize the likelihood of the density function in [24,25].
I believe Song et al. [24,25] are not likelihood-based. By modeling the score function instead of the density function, we can sidestep the difficulty of the intractable normalizing constant of the density function.
My current score for soundness = 1 reflects the above concern about Equation (3).
> Indeed, our implementation in Section D (Algorithm 2) ...
Since the $\gamma$ in Algorithm 2 is not always decreasing, your answer to Question 2 is no, right?
I don't think it's a problem; the answer to this question doesn't significantly detract from the main contributions.
> Additionally, we would encourage you to refer to our response to Reviewer p9zr Weakness 1 where we provide the mathematical formalization of the projections.
Thank you for providing the answer to Question 3 and the mathematical formalization of the projections to p9zr. It is very helpful to understand the overview of the projection algorithms, and I recommend including them in the main text.
---
Rebuttal 2:
Comment: We sincerely appreciate Reviewer Jdpw’s thorough evaluation and are grateful that _our method’s effectiveness under various constraints_ and the _comprehensive coverage of computational cost impacts_ were recognized.
In our main rebuttal, we addressed the following key points:
1. **Clarification on Equation (3)**: We provided a detailed explanation of how Equation (3) relates to maximizing the conditional density function through the reverse diffusion process. The subsequent equations and the update steps, explained in our rebuttal, demonstrate the rigorous theoretical underpinnings of our model.
2. **Details of Projection Algorithm**: We have provided references to the detailed mathematical formalization and the actual code in our submission, ensuring that the implementation details are accessible and transparent. We will also provide additional formalization in the paper, based on the response above.
3. **Dynamic Adjustment of Parameters**: We clarified our method’s approach to parameter adjustment (which indeed explores a dynamic adjustment of $\gamma$ in Appendix D). A discussion on how these adjustments affect the model’s performance across different experimental setups is also included.
Are there any additional concerns that we could address? We are eager to provide further information to assist in your evaluation and to enhance the understanding of our findings.
---
Rebuttal Comment 2.1:
Comment: As the discussion period is nearing its end, we wanted to ask if there are any follow-up points we can clarify. Please also note our summary in the previous comment. Many thanks!
---
Rebuttal 3:
Comment: Thank you for your response and allowing us the opportunity to clarify further:
> The objective of the reverse diffusion process is to maximize the density function
First, let us clarify that the papers we cited [24,25] are to point out that the update step taken in our paper (before projecting) is identical to those in existing literature (for instance it is also Equation 4 in [24]). You will also notice that our Algorithm 1 directly employs this update step (line 6) as in [24]. Next, notice that immediately following the introduction of this update step in [24] it is explained that the sample converges to $p(x)$ by the repeated application of this update step under the regularity conditions in [A]. These conditions are introduced in Equation 2 of [A] and are expressly for the purpose of “convergence to a local maximum” by ensuring that the solution “will reach the high probability regions”.
This can also be directly derived from SGLD. For instance, refer to the formalization of SGLD in [B] Equation 1.2:
$$
d\mathbf{X}(t) = -\nabla F_n (\mathbf{X}(t))dt + \sqrt{2\beta^{-1}}d\mathbf{B}(t)
$$
First, notice that this matches the update step in Equation 4 (of our paper and [24]). As explained in [B], this optimization procedure “concentrates around the global minimum of $F_n(x)$” and parallels the task of directly minimizing $F_n$. Adapted to our update step (as shown below), this becomes the minimization of $-\log q(x)$ or, in other words, the maximization of the density function.
$$
d\mathbf{X}(t) = \nabla \log q (\mathbf{X}(t))dt + \sqrt{2\gamma(t)}d\epsilon(t)
$$
Additionally, we will note that [C] provides a complete proof showing that SGLD converges to an "almost-minimizer" of the function, as the noise maintains some degree of stochasticity. Hence, from these works we can support our claim that this process is "akin to maximizing the density function".
Importantly, we would like to remind the reviewer that formalizing the reverse process as an optimization problem is part of the novelty of our work. Hence, existing papers have not explicitly presented the reverse process as in our work. But *the presentation of this objective in our paper is posed to show how one can incorporate constraints in the reverse process, and how these can be attained using a gradient-based projection method.*
Finally, we certainly agree with your point about using the score function to increase tractability, but this does not change the overall objective from an optimization standpoint. Indeed, again, our method (Algorithm 1) directly employs the update step (line 6) used in [24] (with the variation of using a projected-gradient version).
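To make the convergence claim above concrete, here is a minimal, self-contained illustration (not the paper's algorithm: the score of a standard normal target is hand-coded rather than learned by a network $s_\theta$). Repeated application of the SGLD update $x^{i+1} = x^i + \gamma \nabla_x \log q(x^i) + \sqrt{2\gamma}\,\epsilon$ drives samples, even those initialized far away, into the high-density region of $q$:

```python
import math
import random

random.seed(0)

def score(x):
    # Score of the standard normal target q(x) = N(0, 1): d/dx log q(x) = -x.
    return -x

def sgld_chain(n_steps=500, gamma=0.05):
    # SGLD update: x_{i+1} = x_i + gamma * score(x_i) + sqrt(2 * gamma) * eps.
    x = random.gauss(0.0, 3.0)  # start far from the high-density region
    for _ in range(n_steps):
        x += gamma * score(x) + math.sqrt(2.0 * gamma) * random.gauss(0.0, 1.0)
    return x

# Run many independent chains; the iterates concentrate around the maximizer
# of the density, and their empirical distribution approximates the target.
samples = [sgld_chain() for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With the diffusion-style noise term, the chain stays stochastic, which is why the iterates are an "almost-minimizer" of $-\log q$ (concentrated near the mode) rather than an exact one.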
We thank the reviewer for the opportunity to clarify these points. It is our intent to update the manuscript to better explain our derivation of Equation 3a; thank you for the suggestion.
> Indeed, our implementation in Section D (Algorithm 2) ...
We apologize if our response was unclear here. We reference Algorithm 2 because it dynamically adjusts the gradients to improve convergence; however, we do not explicitly attempt the learning rate schedules referenced in the question.
---
Again, thank you for your willingness to engage with us during the discussion phase. We hope our responses clarify your points. Are there any additional concerns that we could address? We are ready to provide further insights as needed.
---
[A] Welling, Max, and Yee W. Teh. “Bayesian learning via stochastic gradient Langevin dynamics.” Proceedings of the 28th international conference on machine learning (ICML-11). 2011.
[B] Xu, Pan, et al. “Global convergence of Langevin dynamics based algorithms for nonconvex optimization.” Advances in Neural Information Processing Systems 31 (2018).
[C] Raginsky, Maxim, Alexander Rakhlin, and Matus Telgarsky. "Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis." Conference on Learning Theory. PMLR, 2017.
---
Rebuttal Comment 3.1:
Title: Thanks again
Comment: Thanks to the references you provided, I now understand how the reverse diffusion process converges to its Gibbs distribution, which concentrates on the maximum likelihood solution. I have increased the soundness score from 1 to 2. I also agree that considering the reverse diffusion process as an optimization problem is a significant contribution of this paper. However, it is a pity that the introduction of this contribution is very vague, described as "akin to maximizing the density function." Although the author mentioned a plan to revise this, I hesitate to strongly accept the paper without seeing how the revision will be made, so I would like to keep my score.
---
Rebuttal 4:
Comment: Thank you for your continued efforts to review our work. We are glad the additional clarifications have assisted in your understanding of our method.
Do we understand that there are no more limitations pending and that as you mentioned our work is sound, novel, and significant? As you have stated that the only remaining hesitation for you to increase your score is that you would like to see our revisions, let us provide the changes we intend to make:
- We will expand the beginning of Section 4 explaining our derivation of Equation 3a. We will begin by introducing the update step of Equation 4 and briefly explaining SGLD. This will be, verbatim:
>The application of the reverse diffusion process of score-based models is characterized by iteratively fitting the initial noisy samples $x_T$ to the learned approximation of $p(x_0)$.
This optimization is formulated such that a variation of the traditional gradient descent algorithm, *Stochastic Gradient Langevin Dynamics* (SGLD), is used to iteratively transform a sample from the Gaussian distribution $q(x_T | x_0)$ to a sample from the learned distribution $q(x_1 | x_0)$. The update step is provided by:
\begin{equation}
x_{t}^{i+1} = x_{t}^{i} + \gamma_t \nabla_{x_{t}^{i}} \log q(x_{t}^i|x_0) + \sqrt{2\gamma_t}\epsilon,
\end{equation}
where $\epsilon$ is standard normal and $\gamma_t > 0$ is the step size. This step is repeated $M$ times at each noise level in the transition from $x_T$ to $x_0$. To prevent deterministic behavior, an additional term, $\sqrt{2\gamma_t}\epsilon$, is added to the gradient descent algorithm, drawing from *Langevin Dynamics* \cite{song2020score}.
>SGLD can be viewed as an extension of traditional gradient descent algorithms, where the primary goal is to minimize a specified objective function. However, SGLD incorporates a stochastic component, introducing noise into this process. Formally, this procedure converges to a region characterized as an "almost-minimizer" of the objective function, with proximity to the minimizer bounded by $\frac{d^2}{(\sigma^{1/4}\lambda^*)}\log(1/\epsilon)$, where $\sigma^2$ represents the variance schedule, $\lambda^*$ denotes the uniform spectral gap of the Langevin diffusion, and $d$ is the dimensionality of the problem, as outlined in reference [C].
>In this framework, the SGLD algorithm yields samples that are statistically concentrated around the global maximum of the underlying density function, as noted in [B]. Thus, the reverse diffusion process can effectively be approximated by minimizing the negative log-likelihood $-\log q(x_t|x_0)$, or equivalently as maximizing the density function $\log q(x_t|x_0)$ at each given noise level, the gradients of which are estimated by $s_\theta (x_t^i, t)$.
>In traditional score-based models, at any point throughout the reverse process, $x_t$ is _unconstrained_.
When these samples are required to satisfy some constraints, the objective remains unchanged, but the solution to this optimization must fall within a feasible region $C$,
\begin{equation}
\min_{x_{T}, \ldots, x_1} \sum_{t = T}^{1} -\log q(x_{t}|x_0)
\end{equation}
\begin{equation}
\text{s.t.} \quad x_{T}, \ldots, x_0 \in C
\end{equation}
> Operationally, the negative log likelihood is minimized at each step of the reverse Markov chain, as the process transitions from $x_T$ to $x_0$. In this regard, and importantly, the objective of the PDM's sampling process is aligned with that of traditional score-based diffusion models.
> To avoid low density data regions, the sample is optimized to conform to the previous distribution in the Markov chain before proceeding to the consecutive distribution, the transitions being ensured by setting $x_{t-1}^{0} = x_{t}^{M}$, where $x_t^M$ is the final iterate of the previous time step.
- Following this, we will continue with Section 4.1.
As we believe this is a fairly simple change, requiring rearranging some text and adding a few short paragraphs, we do not mind providing this. We hope that this summary of our intended revisions, provided at your request, addresses the remaining doubts. We also note that adding these paragraphs serves only to strengthen our contribution, in addition to the strong empirical and theoretical support provided for what we believe is an important contribution to the adoption of diffusion models in scientific and engineering domains. As you have stated that this is the only remaining concern, we hope that you will consider advocating for strong acceptance. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Structured Learning of Compositional Sequential Interventions | Accept (poster) | Summary: Estimating the causal effect of a sequence of interventions on another sequence is a central task in causal inference / treatment effect estimation. However, for large discrete spaces, canonical assumptions such as Markovian assumptions, short sequences, and so forth will not apply, while very general black-box models may present poor generalizability due to the sparsity of the observed treatment sequences. This work presents a model where the effect of a sequence of interventions on the target sequence can be decomposed as a product of individual time-point-wise effects. This parameterization is fairly general as it parameterizes functions of a specific form exactly, and any measurable function approximately. Identification results are given for the parameter of the proposed parameterization. This motivates a VAE-based fitting approach, together with predictive intervals coming from conformal prediction. Experiments are given, showing competitive performance of the method.
Strengths: The work is original as it shows that coming up with a specialized parameterization can help, both theoretically (identifiability) and empirically (better performance), compared with a general black-box model. It is also significant as the parameterization is actually not particularly restrictive, as Propositions 1 and 2 show that it can recover exactly or approximately general types of functions. Thus the method is generally very principled and should be very applicable in the field.
Weaknesses: 1) IMO the main drawback of the work is in clarity.
1a) It took me significant time to parse what the field is missing and what is the exact contribution of the method. The contribution is specified in the "Scope" paragraph l.37-42 and later in l.64-78, and the previous work with its drawbacks is scattered in l.43-63, Section 4 l.304-322 and Appendix A. I think that these sections should be completely re-organized, notably all the previous work should be put together and the contributions outlined clearly in contrast to such previous work ; especially as the contribution consists actually in specifying a more restricted functional form that's more tailored to a specific type of intervention sequence.
1b) It is not clear what the exact estimand is, or what the exact estimands are, as they are not stated in the problem statement l.79-90. The functional form $\mathbb{E}[X_n^t(d_n^{1:t}) | x_n^{1:t-1}(d_n^{1:t-1}), z_n]$ is also confusing, as it suggests that it does not depend on covariate values but only on $n$, $t$ and interventions, while from l.173 it also depends on covariates. It is only in Section 4 that I understood that predictions were only to be made conditional on previous covariates. Thus, I would be happy if the authors could state in their rebuttal what estimand(s) we are looking for, and if they could write it here.
1c) It is also not clear how the functional form of Equation 1 incorporates "compositionality" of interventions. While it can be guessed from Equation 2, it is only in l.184-192 that it becomes clear this functional form for $\psi$ incorporates compositionality (if I understood it well?)
2) While experiments generally suggest the relevance of the method, I'd have a few concerns:
2a) There seem to be rather few baselines: GRU-0 and GRU-1 seem rather redundant, as they look like mutilated versions of the GRU-2 baseline; the latter seems like the only actually fair comparison against CSI-VAE, as only that baseline takes the whole intervention history into account. The bottom part of Figure 2 indirectly confirms this, as only GRU-2 is evaluated. Thus, all of this might leave GRU-2 as the only real baseline. Could another one be implemented?
2b) Conformal prediction is incorporated in the paper, but it is only evaluated in the Appendix and only on the method. Could an analysis include baselines and be moved to the main part?
Note: I am aware that it might not be easy to obtain new empirical results in the rebuttal period; I am happy to discuss all of this with other reviewers and ACs
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) I struggle to understand what these mean, can you develop a bit?
a) l.38-39 : "including the bursts of non-stationarity that the application of an intervention brings in"
b) l.106-108 : "We do not explicitly condition on $D_{1:t}^n$ in most of our notation, adopting a convention where potential outcome indices $D_{1:t}^n$ are always in agreement with the (implicit) corresponding observed $D_{1:t}^n$".
2) l.173 : how exactly does $\mathbb{E}[X_n^t(d_n^{1:t}) | x_n^{1:t-1}(d_n^{1:t-1}), z_n]$ depend on $x^{1:T}$ exactly? Which components of the latter are used?
3) How stringent are Assumptions 3 and 4?
4) Section 3.1:
a) Is identifiability preserved with variational inference in CSI-VAE?
b) What is $\mu$ in the equation of l.250, and how does this incorporate the functional form of Equation 1?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations have a dedicated paragraph, and social impact is no different than any other work on treatment effect estimation more broadly.
**EDIT (2024/08/11)**: increased my score following the authors' rebuttal and discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your remarks that **“The work is original…”** and that is its **“generally very principled and should be very applicable in the field”**! The many comments about presentation are very useful.
**IMO the main drawback of the work is in clarity. 1a.: …time to parse…1b. It is not clear what the exact estimand is…**
The goal is "behavioral forecasting under hypothetical future interventions" (lines 41-42), where "behaviour" is defined in line 25. We want (line 87) to predict $X_{n^\star}^{T + 1:T + \Delta}(d_{n^\star}^{1:T + \Delta})$ using whatever is observed up to time $T$, extending the past $D_{n^\star}^{1:T}$ with a hypothetical plan of future actions $D_{n^\star}^{T + 1:T + \Delta}$.
We recognize that, even though this description fits standard machine learning formulations, it is typical for a causal modeling paper to have equations of estimands presented explicitly, even if this is not natural when the problem is predictive. To address this, we propose bringing predictive potential outcomes in terms of functionals earlier on in the write-up. Under the usual mean-squared error loss, this commonly boils down to the expected value of each $X_{n^\star}^{T + 1}$, $X_{n^\star}^{T + 2}$, etc. given the past up to $T$. We originally didn’t want to prescribe this because one may want to use a different loss function other than mean squared error, where plain expectations wouldn’t be the answer. So, it’s not that straightforward to introduce a single equation such as e.g. $\mathbb E[X_n^{T’}(d_n^{1:T’}) | x_n^{1:T}(d_n^{1:T}), z_n]$ for $T’ > T$ as the target estimand, because *we are not particularly committed to any given functional of the predictive distribution*: we only want to deal with (potential) observables, not with functionals (that’s why we focus on conformal prediction instead of confidence intervals, and predictive potential outcomes as opposed to cross-world causal effects).
Nevertheless, we should anticipate that some readers (like the reviewer) may find it jarring not to have an explicit estimand equation laid out right at the beginning. Given that Eq. 1 is presented just for the predictive mean, we can use this as the family of estimands. So in a revised manuscript we will further emphasize the predictive aspect within the context of means (motivated by mean squared losses) earlier on.
(we have one question for the reviewer: we didn’t quite get what the reviewer meant by “... it suggests that it does not depend on covariate values but only on $n$, $t$, and interventions, …” - the functional form listed seems to be the same as in l.173?)
**1c) It is also not clear how the functional form of Equation 1 incorporates "compositionality" of interventions... (if I understood it well?)**
The reviewer got it correctly, but as mentioned it won’t hurt to anticipate early on in the text where we are going with compositionality, including the function composition of Eq. 4.
**Experiments**
We focused on the GRU because it is a well established method for learning sequential predictions out of categorical input sequences - more flexible than vanilla RNNs, less complex than LSTMs or transformers, which would be major overkill: we already overfit with GRUs, so an LSTM or beyond would not help (and did not, in preliminary experiments). We nevertheless provide updated experiments in the shared rebuttal box.
GRU-0 and GRU-1 should be seen as ablation studies. GRU-0 quantifies the total contribution of the signal coming from $D$, showing that the results are not just artifacts of modeling the evolution of the $X$ series. GRU-1 excludes long-term histories, focusing only on the latest intervention, and illustrates that a very strong Markovian structure cannot emulate long-term direct effects. Doing this in the context of a GRU shows that even a flexible model that potentially overfits nevertheless benefits from the long-term, non-Markovian contributions of past interventions.
Regarding conformal prediction, as the results are obtained in (semi)synthetic experiments where we expect the theory to hold, they are in less need of thorough empirical assessment. But we will do our best to include them in the main text upon the eventual acceptance of the paper, which would allow us an extra page.
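For readers unfamiliar with the procedure, here is a minimal split-conformal sketch (a generic toy with a fixed linear predictor and made-up data, not the paper's CSI-VAE pipeline): the finite-sample quantile of calibration residuals yields intervals with approximately $1-\alpha$ marginal coverage on held-out points.

```python
import math
import random

random.seed(1)

# Toy setup: y = 2x + Gaussian noise; the "model" is the fixed predictor 2x.
def predict(x):
    return 2.0 * x

data = [(x, 2.0 * x + random.gauss(0.0, 1.0))
        for x in (random.random() for _ in range(2000))]
cal, test_set = data[:1000], data[1000:]

# Split conformal: the (1 - alpha) quantile of calibration residuals,
# with the usual (n + 1) finite-sample correction, gives an interval radius.
alpha = 0.1
residuals = sorted(abs(y - predict(x)) for x, y in cal)
k = math.ceil((len(residuals) + 1) * (1.0 - alpha)) - 1
q = residuals[min(k, len(residuals) - 1)]

# Empirical coverage of [predict(x) - q, predict(x) + q] is about 1 - alpha.
coverage = sum(abs(y - predict(x)) <= q for x, y in test_set) / len(test_set)
```

The attraction of the method is that the coverage guarantee is distribution-free, requiring only exchangeability between calibration and test points.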
**1a) l.38-39**
When an intervention is applied, we expect a system to behave in a non-stationary way in the short-term, eventually settling down to a possibly new equilibrium. See for instance Fig. 1 of [9].
**1b) l.106-108**
We do not allow conditioning of $X^{1:t}$ generated under different $D^{1:t}$ other than the one used in forecasting future trajectories i.e. no cross-world counterfactuals.
**2. l. 173**
They are used in the definition of $\phi_{nl}^t$ (see line 109).
**3. Stringency Assumptions 3 and 4.**
Assumption 4 is relatively benign, basically that we don’t have linear dependency between (features of) past trajectories.
Assumption 3 essentially requires that there are enough units of time for a large enough group of individuals to be exposed to a particular treatment level $d$ prior to being exposed to something else. It will depend on how smooth a treatment effect is. Citing again Fig. 1 of [9], realistic effect shapes in many domains can be described with relatively few parameters (effect smoothly going up, then down, then settling), which should be amenable to this assumption holding in practice.
**4. Section 3.**
The VAE still uses a likelihood (decoder) where the conditional mean is given by Eq. 1, and the conditional variance is homoscedastic and easily shown to be identifiable. Identifiability of the parameters of the encoder is not fundamental (it just represents a posterior distribution, it does not specify a causal structure) and its variance will go to zero as the number of time points increases.
Many thanks again for the useful questions!
---
Rebuttal Comment 1.1:
Title: Answer to Aug 7 rebuttal
Comment: Many thanks to authors for the extensive rebuttal. My questions have been answered, except the following points :
1) I still do not understand exactly how $E[X_n^t(d_n^{1:t})|x_n^{1:t−1}(d_n^{1:t−1}),z_n]$ in Equation 1 and $\phi_l(x_n^{1:t−1}(d_n^{1:t−1}), z_n)$ in Equation 109 depend on $x_n^{1:t−1}, d_n^{1:t−1}, z_n$, or more precisely what the notation $x_n^{1:t−1}(d_n^{1:t−1})$ means. It suggests that $x_n^{1:t−1}$ is a function of $d_n^{1:t−1}$, while of course also depending on $n$ and $t$. Further, from this interpretation, it is at first unclear whether this function is known a priori (before observing the data) or pre-specified, or it is observed in the data, or it has to be learnt, etc... This is what I mean in the statement raised by your own question "we have one question for the reviewer: we didn’t quite get what the reviewer meant by" : if $x_n^{1:t−1}$ is a function that is pre-specified or known a priori, then indeed $x_n^{1:t−1}(d_n^{1:t−1})$ only depends on $d_n^{1:t−1}$ in the argument, and $n,t$ in the index.
It is later in the paper that I understood that these $x_n^{1:t−1}$ are actually values taken by the (potential) covariates, thus these $x_n^{1:t−1}$ are vectors or scalars but not functions (if I am not mistaken). Thus, it now seems to me that the $x_n^{1:t−1}(d_n^{1:t−1})$ notation with lowercase $x$ is incorrect and should be scrapped altogether and replaced with $X_n^{1:t−1}(d_n^{1:t−1})$, with capital $X$, in statistical quantities, and with $x_n^{1:t−1}, d_n^{1:t−1}$ in functions. To be more precise, I understand that one should write
a) $E[X_n^t(d_n^{1:t})|X_n^{1:t−1}(d_n^{1:t−1}) = x_n^{1:t−1}, z_n]$ in Equation 1, which makes it clear that $X_n^{1:t−1}(d_n^{1:t−1})$ is a potential covariate that is indexed at the observed intervention $d_n^{1:t−1}$ and is here taking the observed value $x_n^{1:t−1}$ ;
b) $\phi_l(x_n^{1:t−1}, d_n^{1:t−1}, z_n)$ in Equation 109, as this is a function as in l.173 and it does not formally rely on random variables, including potential covariates.
Is all of this correct? If not, is $x_n^{1:t−1}$ for a small $x$ actually a function of $d_n^{1:t−1}$?
2) My mistake, I had misread Section 3.1 and missed the point that the mean and conditional variance of l.250-251 are actually the posterior parameters of $\beta_n$ (feel free to correct me if I am wrong here). Here are some questions, mostly for clarity:
a) Are these parameters for the encoder then?
b) By the conditional mean and variance of the decoder, do you refer to Equation 233?
c) Do you also confirm that the VAE departs from the setup of Theorem 1, as notably the basis function is learnt?
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you, this follow-up is super-useful to understand where you (and other potential readers) are coming from! Yes, we used $x_n^{1:t - 1}(d_n^{1:t - 1})$ as notation for a realization of $X_n^{1:t - 1}(d_n^{1:t - 1})$. Although not unsound, we can see how this may invite unnecessary confusion. Even though using $v$ as short-hand notation for the event $V = v$ when following a conditioning bar is standard, the extra piece of notation about the potential outcome index gets in the way and it's not very common. We wouldn't want to use $\mathbb E[X_n^t(d_n^{1:t}) | X_n^{1:t-1}(d_n^{1:t-1}), Z_n]$ (capital letters) as later on in e.g. Proposition 3 we mention realizations, but $\mathbb E[X_n^t(d_n^{1:t}) | X_n^{1:t-1}(d_n^{1:t-1}) = x_n^{1:t-1}, Z_n = z_n]$ will help to establish a convention about the implicit regime invoked when referring to $x_n^{1:t-1}$, including our references for $\phi_l$ (where we do need to instantiate as $\phi_l(x_n^{1:t-1}, d_n^{1:t-1}, z_n)$ when defining $\phi_{nl}^t$ in line 109). Thank you!
Concerning your point 2, yes those are posterior parameters. They are not independent parameters of the encoder per se, strictly speaking the parameters are the ones in the MLP/GRU pipeline (lines 250-251), which then define the mean and variance of the variational Gaussian distribution in the usual amortized variational inference sense. Line 233 is the likelihood function indeed, where $f(\cdot)$ is the conditional mean as in Eq. 1 and the error variance is just another parameter we can optimize with respect to. Finally, regarding your point c), our theory can be respected by basically freezing $\phi$ after the initial $T_0$ period (in this case, $\beta$ is considered identified with respect to the learnt $\phi$ -- it's not fundamental that a different $\phi$ would imply a different $\beta$, the choice of basis $\phi$ is problem-dependent anyway). In practice though, as we say in line 258, we allow our implementation to just backprop through $\phi$ even after $T_0$ (based on preliminary experiments with different simulations), although this can be easily switched off.
Many thanks again, very useful to get suggestions on how to tweak the notation to increase accessibility. Good stuff! | Summary: This paper considers a special case of a series of interventions where the interventions are categorical and sparse, and interventions can affect later timestamps. This is a form of causal extrapolation. The authors propose to study this using a conditional mean model that utilizes basis functions and subsequently develop corresponding algorithms.
Strengths: 1. The problem is interesting and well-motivated.
2. As far as I can tell, the theory is sound.
Weaknesses: 1. Section 2.1 is really hard to parse.
2. The assumptions in Section 2.2 are not explained intuitively, and I don't know how necessary they are.
3. The experiments are synthetic and semi-synthetic. Although, since the nature of the paper is mostly theoretical, I wouldn't consider that to be a huge issue.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In equation (1), do you need the lower case x to be a potential outcome?
2. Is the notation changed in 2.1? f does not appear in equation (1).
3. For CSI-VAE, why use GRU instead of more modern arch like attention or even LSTM?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The only limitation is how practical these methods are.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review, and for agreeing that **“the problem is interesting and well-motivated”**! We would like to see more of that in the community.
**Section 2.1 is really hard to parse**
We hope the following helps. Eq. 1 takes a standard tensor decomposition format. It can be motivated by a variety of starting points, for instance Taylor series approximations. Its main hyperparameter is $r$, for which there are standard ways of choosing it - see for instance [1]-[4].
In Section 2.1, we address it from two perspectives. *First*, when time is unbounded: there we assume that the true model can be constructed from the exact form in line 149, as motivated by results such as Proposition 1 of [25], which can then be transformed into the structure of Eq. 1. The reasoning is formalized as Proposition 1 in our paper. *Second*, when time is bounded: we can allow the true data generating functions to belong to a broad class of functions, showing that there exists a finite $r$ that controls the approximation error of Eq. 1 to any a priori degree. This is formalized as Proposition 2 in our paper.
In particular, separating treatment variables from the rest, as in the equation in line 135, can be motivated as follows. First, by recognizing that this is a standard way of writing a regression function with categorical inputs. For instance, if we have two categorical inputs $d_1 \in \{0, 1, 2\}, d_2 \in \{0, 1\}$, we can have the following (overcomplete) ANOVA model with parameters $\theta_{00}, \theta_{01}, \theta_{10}, \theta_{11}, \theta_{20}, \theta_{21}$, with indicator functions $f_{(d_1’, d_2’)}(d_1, d_2) \equiv I(d_1 = d_1’, d_2 = d_2’)$:
$$f(d_1, d_2) \equiv f_{(0, 0)}(d_1, d_2) \times \theta_{00} + \dots + f_{(2, 1)}(d_1, d_2) \times \theta_{21}$$
When there are other inputs beyond the categorical variables of interest, we can make each $\theta$ a function of those.
Similar constructions appear in many places, e.g. energy functions, exponential families, but also all sorts of machine learning methods - as we don’t want to span all exponentially many combinations of the discrete space. In a regression tree with categorical inputs, for instance, we don’t use all combinations, but only those that correspond to paths from the root to a leaf in some tree, with $\theta$ being the piecewise constant expected value of the output at inputs mapped to that leaf. This is a standard interpretation that appears in e.g. Hastie et al.’s *Elements of Statistical Learning*. One interpretation of what we do in Eq.1 is to use a differentiable embedding of the $d$ vector with dimension $r$, instead of $r$ combinatorial paths down a tree of $r$ leaves.
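The overcomplete ANOVA representation above can be checked in a few lines (a hypothetical toy with arbitrary coefficient values $\theta_{d_1 d_2}$, chosen only for illustration): any table of effects over $d_1 \in \{0, 1, 2\}$, $d_2 \in \{0, 1\}$ is reproduced exactly by the indicator-basis expansion.

```python
import itertools

# Hypothetical coefficient values theta_{d1 d2}; any numbers would do.
theta = {(0, 0): 1.5, (0, 1): -0.2, (1, 0): 0.7,
         (1, 1): 2.0, (2, 0): 0.0, (2, 1): -1.1}

def indicator(level):
    # f_{(d1', d2')}(d1, d2) = I(d1 = d1' and d2 = d2')
    return lambda d1, d2: 1.0 if (d1, d2) == level else 0.0

basis = {lvl: indicator(lvl) for lvl in theta}

def f(d1, d2):
    # Overcomplete ANOVA form: sum over the indicator basis times coefficients.
    return sum(basis[lvl](d1, d2) * theta[lvl] for lvl in theta)

# The expansion reproduces the coefficient table exactly at every level.
for d1, d2 in itertools.product(range(3), range(2)):
    assert f(d1, d2) == theta[(d1, d2)]
```

Making each $\theta$ a function of the remaining inputs, as described above, turns this lookup table into a regression function with categorical treatment inputs.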
We are very happy to take follow-up clarification questions on the above.
**The assumptions in section 2.2 is not explained intuitively and I don’t know how necessary they are.**
Assumption 1 allows a large enough window of time so that we can identify $\beta$ first by least squares. For least-squares to be well-posed we need that the corresponding rows in the observation matrix are linearly independent. This is what Assumption 2 states (pseudo-inverses basically mean the matrix that provides the least-squares projection).
Assumption 3 states that, when we need to learn about intervention level $d$, then we need a sufficient number of units getting that assigned, and that we leave these units unperturbed by enough time so that we don’t have further interventions getting conflated with $d$. Here, “enough time” is formalized in terms of the number of intervention parameters $k_d$.
Assumption 4 is analogous to Assumption 2, now applied to the identification of $\psi$: having identified $\beta$ and fixing $\phi$, this will become another least-squares problem where again we need an explicit assumption about the rank of the matrix of observations.
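As a small, self-contained illustration of these rank conditions (a generic sketch with made-up numbers, not the paper's estimator): a least-squares problem has a unique solution exactly when the Gram matrix $X^\top X$ of the observation matrix is nonsingular, which for a two-column design reduces to a nonzero determinant.

```python
def gram_det(rows):
    # Determinant of X^T X for a two-column design matrix X, given as (x, y)
    # rows. It is nonzero exactly when the two columns are linearly
    # independent, i.e. when the least-squares problem is well-posed.
    a = sum(x * x for x, _ in rows)
    b = sum(x * y for x, y in rows)
    c = sum(y * y for _, y in rows)
    return a * c - b * b

independent = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # full column rank
collinear = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]    # second column = 2 * first
```

In the collinear case the pseudo-inverse still exists, but infinitely many parameter vectors fit equally well, which is exactly the failure of identification the assumptions rule out.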
Let us know whether the above is helpful.
**The experiments are synthetic and semi-synthetic. Although since the nature of the paper is mostly theoretically, I wouldn’t consider that to be a huge issue.**
That was our reasoning too. We believe the main point is to call attention to the fact that off-the-shelf black boxes may not be the wisest option here (a point we want more people to appreciate), so keeping the benchmark as uncluttered and controllable as possible was one of our main objectives. We nevertheless provide updated experiments in the shared rebuttal box.
**In equation (1), do you need the lower case x to be a potential outcome?**
At least implicitly it is necessary, since past $X$s must be exposed to the same levels of past treatments as the future $X$s. We are not against other pieces of notation, such as having a single $do(d_n^{1:t})$ operator to indicate a single-world set of interventions. Sometimes it is slightly more convenient to have the potential outcome notation, as in the text of Assumption 4, where we refer to $\phi$ as explicitly generated at the specific intervention level $d_n^{1:t + t' - 1}$, but it has some disadvantages too (such as a heavy syntax).
**Is the notation changed in 2.1? f does not appear in equation (1).**
It is changed to emphasize that in 2.1 we are talking about purely generic function representations - but yes, it is motivated to eventually take the shape of Eq. 1. In an eventual camera-ready version, we will modify the start of Section 2.1 to provide a short summary of what will follow, to facilitate reading.
**For CSI-VAE, why use GRU instead of more modern arch like attention or even LSTM?**
The GRU was actually introduced several years after the LSTM as a way of simplifying it for applications where a full-blown LSTM was overkill. We considered the GRU better suited to our benchmarking, as we were not dealing with very long sequences and it was already overfitting - so there was little to be gained (and, actually, a possibility of doing worse) by falling back on an LSTM, not to mention transformers.
Thank you again for your time and feedback!
---
Rebuttal Comment 1.1:
Comment: Thanks for you reply! I don't have any further question at this point.
---
Reply to Comment 1.1.1:
Title: Closing comments
Comment: Thanks again for your time and engagement. We hope we have addressed all questions about clarity and experiments. | Summary: This paper studies treatment effect estimation under sequential, discrete interventions.
It is assumed that all treatments are fully independent and impact all future outcomes.
It studies the problem of "causal extrapolation" where some combinations of sequential interventions may not have been observed during training.
Parametric assumptions on the causal mechanisms are made and used for generalising to unseen interventions.
A matrix factorisation approach is developed to fill in the gaps for the unseen interventions.
The approach is empirically tested on synthetic and semi-synthetic data.
Strengths: - This paper studies an important and, in my opinion, an understudied problem in causality: causal extrapolation, as it is called in the paper. Many papers look at assumptions that can be made on the causal structure alone, whereas here, the functional form of the structural equations is considered as well.
- The paper studies a way to generalize over unseen combinations of interventions. That is an important concern, especially in the time series context.
Weaknesses: - The mathematical formalisation is very difficult to follow. Wild statements are made without properly introducing the objects, motivating the statement or clarifying what assumptions are made in order for the statement to hold (see questions below). It is unclear which parts follow from the assumption of the causal relationships in Fig. 1 and what additional assumptions or approximations are made.
- The assumptions on the data generating process are too restrictive for the method to have much hope of being applied to real data (see limitations).
**Minor:**
- L21 and L 86: Typo "an unit"
Technical Quality: 2
Clarity: 1
Questions for Authors: - Eq. 1: Where does this come from? Can any treatment effect that follows the causal structure in Fig. 1 be expressed like this? Or is this a simplifying modelling assumption?
- L136: What are ANOVA models?
- L123: What does the last sentence in this paragraph mean?
- Equation in L135: What does this function correspond to? The equation seems not to hold for general functions: e.g. if the dimension of the two RHS functions is 1, then this is certainly not a statement that is true in general. Under what conditions can you decompose a function like this? What are the terms on the RHS? How do they fit into the initial problem setting?
- Prop. 3: Does this assume that equation 1 holds?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: - Fig. 1: The assumed causal structure severely limits the applicability of the presented approach. There is no allowed confounding and, more importantly, the interventions are assumed to be independent. The latter assumption would require full randomisation during data generation. It does simplify things a lot, since it avoids having open paths through conditioning on colliders. But, in essence, it is assumed that the data comes from an RCT. Therefore, the contribution is applicable to RCTs that do not cover the full range of combinatorial interventions.
- The identifiability Assumption 1 means that we essentially have full access to probe each unit (we need to have the "no intervention" applied for several time steps). That is, we essentially can perform an RCT on all units. In that case, why not keep the units in the lab, rather than predicting what will happen under future interventions?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review, and for the much appreciated point that the problem is **“important and, … an understudied problem in causality”**, which is one of the main messages we wanted to convey! We address all clarification questions below.
**Eq. 1: Where does this come from?**
This is an excellent point, which is why we dedicated the entire Section 2.1 of our paper solely to the representation power of Eq. 1. For a detailed answer to your question, please consult it; here is a summary.
Eq. 1 takes a standard tensor decomposition format. It can be motivated by a variety of starting points, e.g. Taylor series approximations. Its hyperparameter is $r$, for which there are standard selection methods; see e.g. [1]-[4].
We address it from two perspectives. *First*, when time is unbounded: there we assume that the true model can be constructed from the exact form in line 149, as motivated by results such as Proposition 1 of [25], which can then be transformed into the structure of Eq. 1. The reasoning is formalized as Proposition 1 in our paper. *Second*, when time is bounded: we can allow the true data generating functions to belong to a broad class of functions, showing that there exists a finite $r$ that controls the approximation error of Eq. 1 to any a priori degree. This is formalized as Proposition 2 in our paper.
**L136 : ...ANOVA...?**
Analysis of variance. They are the workhorse of analysis of experiments and widely taught across applied sciences.
**L123: … last sentence in this paragraph…**
$k_d$ is the number of parameters associated with a particular level $d$. The symbol was already used in line 114 to describe the time-bounded model. It does not appear explicitly in the time-unbounded model. Since it has three parameters, we define $k_d = 3$, so that we can refer to it later in e.g. Assumption 3 without having to mention which of the intervention models is used.
**Equation in L135**
It is a standard way of writing a regression function with categorical inputs. For instance, if we have two categorical inputs $d_1 \in \{0, 1, 2\}$ and $d_2 \in \{0, 1\}$, we can have the following (overcomplete) ANOVA model with parameters $\theta_{00}, \theta_{01}, \theta_{10}, \theta_{11}, \theta_{20}, \theta_{21}$, with indicator functions $f_{(d_1', d_2')}(d_1, d_2) \equiv I(d_1 = d_1', d_2 = d_2')$:
$$f(d_1, d_2) = f_{(0, 0)}(d_1, d_2) \times \theta_{00} + \dots + f_{(2, 1)}(d_1, d_2) \times \theta_{21}$$
When there are other inputs beyond the categorical variables of interest, then each $\theta$ can be a function of those.
This appears in many places, e.g. energy functions, exponential families etc., but also all sorts of machine learning methods - as we don’t want to span all exponentially many combinations of the discrete space. In a regression tree with categorical inputs, for instance, we don’t use all combinations, but only those that correspond to paths from the root to a leaf in some tree, with $\theta$ being the piecewise constant expected value of the output at inputs mapped to that leaf. This is a standard interpretation that appears in e.g. Hastie et al.’s *Elements of Statistical Learning*. One interpretation of what we do in Eq.1 is to use a differentiable embedding of the $d$ vector with dimension $r$, instead of $r$ indicator functions given by a tree of $r$ leaves.
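As a toy illustration of the indicator-function form above (our own sketch; the symbols and parameter values are made up, not from the paper), the overcomplete ANOVA model reduces to a table lookup once we note that exactly one indicator fires per input combination:

```python
# Toy sketch of the overcomplete ANOVA model above (illustrative only):
# d1 in {0, 1, 2}, d2 in {0, 1}, one parameter theta_{d1 d2} and one
# indicator function per combination of levels.
levels = [(d1, d2) for d1 in range(3) for d2 in range(2)]
theta = {lv: 0.5 * i for i, lv in enumerate(levels)}  # arbitrary parameters

def f(d1, d2):
    # sum over all indicator * parameter terms; exactly one indicator fires
    return sum(theta[lv] * float((d1, d2) == lv) for lv in levels)

# Because the indicators partition the input space, f is a table lookup.
assert f(2, 1) == theta[(2, 1)]
```

One interpretation mentioned above: replacing these indicator functions with a learned $r$-dimensional embedding of $d$ recovers the differentiable structure of Eq. 1.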
**Prop. 3: …equation 1…?**
It does assume that Eq. 1 holds (what would $\beta$ mean otherwise?). The entire Section 2.2 refers to symbols introduced in Eq. 1.
We therefore argue that the clarifications above fully address weakness #1, “The mathematical formalisation...”
**Fig. 1: The assumed causal structure severely limits the applicability … RCT on all units.**
Thanks for raising these points, which quite possibly could come from a few other interested readers too, so it’s very useful to put them to rest.
The structure in Fig. 1 is far more flexible than the standard structure for sequential interventions: it allows interventions to have direct effects indefinitely into the future.
The structure in Fig. 1 does allow confounding: it is only depicting what happens when each $D_t$ is controlled. This is the standard graphical way of denoting an intervention in a random variable $D_t$ (e.g., [22, 32]): wipe the edges into it, and adopt a different symbol (here, a square) to indicate that it is a fixed index, not a random variable anymore. This structure is compatible with a non-manipulated graph where the entire past causes $D_t$. It is ill-posed to say that "interventions are assumed to be independent": intervention variables are not random variables, so the concept of probabilistic independence does not apply to them.
We do allow for observational data: the method is introduced *as if* we are allowed to *interpret* each $D_t$ as controlled. To quote line 39,
*“We consider the case where each $D_n^t$ behaves **as if** it was randomized…”*
For avoidance of doubt, in lines 81 and 321:
*"sequentially unconfounded with the system by randomization* **or assumption**"
*“We can rely on standard approaches of sequential ignorability [35] to **justify our method in the absence of randomization**.”*
See also lines 526-532 (Appendix A).
That is, *nothing in our results change at all* if confounding can be blocked by observables, e.g., if in the non-intervened graph we have background variables $Z$ pointing to all $D$, and past $D$ variables pointing to all future $D$ variables (we do not explicitly consider the case of variables $Z^t$ in between variables $D^t$, $D^{t + 1}$, but they can be absorbed into $\phi$ and the do-calculus is still there for us to use.) That is, we can still identify functionals such as Eq. 1, and the conformal prediction results in Section 3.2 do not assume $D$ is randomized. We can make these points more explicitly in the paper, this is a good idea.
We therefore argue that the clarification above fully addresses weakness #2, “The assumptions….”
Thanks again for the questions!
---
Rebuttal Comment 1.1:
Comment: Thank you for giving a detailed rebuttal. I have read the other reviews, rebuttals and had another look at the paper and will try to answer to this rebuttal below.
## Eq. 1
So are you saying that the RHS of Eq. 1 is an estimator for the LHS?
The equal sign would suggest that this is a statement about the true data generating process rather than a way to estimate the expectation.
## Fig. 1
I'm afraid the explanation did not resolve the confusion.
In L39 you say “We consider the case where each $D_n^t$ behaves as if it was randomized…”, a similar quote in L81, which you provide.
But in your rebuttal you say that confounding between action variables is allowed, but not preceding action variables causing future actions (and I suppose no confounding between actions and outcomes).
Why did you make the assumption of randomization in the first place? In other words, which results in the paper do not hold when you do not make the assumption of randomization?
---
Reply to Comment 1.1.1:
Title: Re: Official comment
Comment: Thank you very much for the reply. We hope the following addresses the remaining follow-up questions.
**Eq. 1: So are you saying that the RHS of Eq. 1 is an estimator for the LHS? ...**
Apologies, but we are genuinely confused where this conclusion is coming from. We don't see anything in our rebuttal that suggests that, and it would be helpful to have the quote of the passage leading to it. Eq. 1 is indeed a "a statement about the true data generating process".
**Fig. 1: ...in your rebuttal you say that confounding between action variables is allowed, but not preceding action variables causing future actions...**
Again, we don't see anywhere in our rebuttal a statement that $D$ variables cannot be causing others. In fact, we explicitly say the opposite ("*...nothing in our results change at all if ... past $D$ variables pointing to all future $D$ variables.*"). Perhaps it's useful to recall the difference between interventions and random variables within our context.
In a fully connected DAG according to ordering $(D_1, X_1, D_2, X_2)$ where $D_1$ and $D_2$ are random instead of controlled, we have the edges
$D_1 \rightarrow X_1, D_1 \rightarrow X_2, D_1 \rightarrow D_2$, $X_1 \rightarrow D_2$, $X_1 \rightarrow X_2$, $D_2 \rightarrow X_2$.
When $D_1$ is controlled to $d_1$ and $D_2$ to $d_2$, one graphical characterization is
$d_1 \rightarrow X_1, d_1 \rightarrow X_2$, $X_1 \rightarrow X_2$, $d_2 \rightarrow X_2$,
where lower case here indicates that we are talking about exogenous fixed indices (squares in Fig. 1).
There isn't anything else to be said if $d_2$ is functionally independent of the past, but even then this is not at all an issue. This is because, whether or not $d_2$ is a function of $(d_1, X_1)$, it won't affect identification strategies such as the sequential back-door/g-formula; see e.g. Chapter 4 of [32]. We can still carry $d_2$ symbolically into any averaging over the past, even if $d_2$ is functionally related to $(D_1, X_1)$ and we treat $D_1$ as uncontrolled and average over it (since we don't average over the past, this point is redundant anyway). Other representations such as SWIGs suggest adding edges between fixed indices with functional dependencies, see e.g. Chapter 19 of [22], but if we were to adopt SWIGs in Fig. 1 the diagram would become an incomprehensible mess. Technically speaking, even edges like $d_1 \rightarrow X_1$ are "unnecessary", as the graphical model is meant to represent the independence structure of a distribution over random variables, and $d_1$ isn't one (preserving "$d_1 \rightarrow \dots$" saves us from having to label the nodes as $X_1(d_1)$ etc.).
To summarize, our Fig. 1 is a cosmetic choice among other plausible choices and exemplifies already the case of non-dynamic regimes with no ambiguity. There wasn't really much of a deep point we were trying to make (other than not requiring any Markovian assumptions connecting past and future), and we are earnestly surprised that this is raising a discussion...
**Why did you make the assumption of randomization in the first place?**
This was just a way of saying that we assume to have access to the distribution of (single-world) potential outcomes, where our predictions lie. Whether we obtain it by (say) controlled experiments, sequential ignorability, proxies, instrumental variables etc. is orthogonal to our main results. We thought readers would appreciate if we focused on the main novel aspects of our contribution. We are happy to make this point more explicitly in the introduction.
We really appreciate your engagement, and we hope the above has been helpful.
---
Rebuttal 2:
Title: Thank you: closing comment
Comment: Thank you for your assessment. As the discussion started from our paper, we will close it by registering our points of disagreement:
* "Eq. 1 is a reasonable approach": Eq. 1 follows a matrix-factorisation approach that is well-received by the community. We are still not sure where the misunderstanding is coming from, how the reviewer initially missed the discussion in Section 2.1, or the source behind statements such as "Eq. 1 as an estimator" following our rebuttal. Perhaps in the same way the reviewer is unfamiliar with ANOVA, they are also unfamiliar with the long history behind structures such as Eq. 1.
* Predictions under control of a full sequence of actions is a fundamental problem of control theory, dynamic treatment regimes, reinforcement learning, among other fields, including the multi-million user company which gave us feedback on the writing of this paper. It's a literature which we assume our readers feel comfortable with. Once again, we repeat that we do not require the data itself to follow actual randomized trials.
Unfortunately, besides the itemized questions in the review, which were responded to in detail above and cross-referenced with the literature and passages in the paper, we do not believe we received any concrete advice that would allow us to address the subjective judgement call of "Wild statements are made without properly introducing the objects, motivating the statement or clarifying what assumptions are made in order for the statement to hold". As such, we leave on record our own judgement that this statement is unsubstantiated.
To end on a positive note, we are mindful that reviewing is a volunteer, time-consuming job, and we are still genuinely thankful for the time and engagement of the reviewer. | Summary: The authors propose CSI-VAE, a method that solves the problem of forecasting potential outcomes in multiple interventions. In particular, the authors seem to propose a solution to the problem of a large (combinatorial) space of future treatment plans using the controlled past treatment sequences for inference.
Strengths: STRENGTHS
* The paper is well structured, written, and generally easy to follow.
* Including automated (distribution-free) uncertainty quantification using CP is a nice-to-have and will definitely improve adoption in practice, in particular in medical applications
* Focusing on identification in this tricky area is very welcome, and I would encourage other authors to also include at least a discussion on identification; thank you for this
Weaknesses: WEAKNESSES
* The problem sounds fairly general which would allow other treatment effects over time models to also be relevant to discuss and in particular benchmark against. Currently, it seems the authors only benchmark against ablations of CSI-VAE and naive methods using GRU? Am I mistaken?
Technical Quality: 3
Clarity: 3
Questions for Authors: I find the paper to be quite well done, my only remark is wrt the benchmark settings which I would like to see addressed in a rebuttal.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, and for finding that our paper is **“quite well done”** and **“generally easy to follow”**.
The question about the benchmark is a good opportunity for further clarifications. In order to keep the paper focused, we stripped away complementary discussions about dealing with confounding to directly address the value of taking a structured approach to causal extrapolation. In the benchmark, this means asking directly whether a black-box method can learn to smooth the gaps in the data by itself, without the structure we build. Several methods could be used. We focused on the GRU because it is a well-established method for learning sequential predictions from categorical input sequences - more flexible than RNNs, less complex than LSTMs or transformers, which would be major overkill: we already overfit with GRUs, so an LSTM or beyond would not help (and did not, in preliminary experiments). Autoregressive linear models were also attempted in simpler initial versions of the dataset, but they were already underfitting badly and we did not report them.
We added further methods (LSTMs, transformers) as requested to the shared rebuttal box. Code has been updated (not uploaded yet, we don't think we are allowed to), and it basically replaces calls to GRU with calls to other methods within the PyTorch library. As we anticipated and had seen in preliminary runs, these more complex methods didn't really add much of substance to our central thesis.
Thanks again for your feedback, and happy to take further questions!
---
Rebuttal Comment 1.1:
Title: Closing comments
Comment: Thanks again for your review and we hope we have addressed the question about benchmarks - including the easiness by which further comparisons can be added to our codebase. | Rebuttal 1:
Rebuttal: We take this opportunity to once again thank all reviewers for their time and suggestions! Individualized answers have been provided to each of you.
We use this space to report an update on experiments. To summarize the context, we chose the GRU family merely as an illustration of a modern black-box method that does sequential prediction with representation learning. We didn't think the message would be materially any different if we used a classical RNN, or LSTM, or transformers: the GRU itself was designed to be a good compromise between RNNs and more complex architectures.
In any case, it may be simpler to just run the experiments than argue the above (although we do stress that the main point is structured vs black-box, we never cared whether it was GRU or something else). Since it is straightforward to replace GRUs with other related methods in PyTorch, we did just this. Below are updates including LSTMs and transformers. Autoregressive linear models were also attempted in simpler initial versions of the dataset, but they were already underfitting badly and we do not report them.
* Fully Synthetic
| Model | T+1 | T+2 | T+3 | T+4 | T+5 |
| :----------- | :------: | :------: | :------: | :------: | -------: |
| CSI-VAE-1 | $36.53$ | $41.46$ | $41.73$ | $41.12$ | $41.32$ |
| CSI-VAE-2 | $97.80$ | $118.25$ | $117.79$ | $127.25$ | $135.03$ |
| CSI-VAE-3 | $138.78$ | $164.02$ | $141.71$ | $132.59$ | $125.55$ |
| GRU-0 | $229.72$ | $269.66$ | $220.95$ | $208.30$ | $188.43$ |
| GRU-1 | $230.76$ | $270.83$ | $220.93$ | $208.33$ | $184.92$ |
| GRU-2 | $93.73$ | $101.03$ | $118.01$ | $88.53$ | $132.28$ |
| LSTM | $114.71$ | $126.65$ | $137.12$ | $105.22$ | $137.19$ |
| Transformer | $111.66$ | $122.08$ | $150.57$ | $175.84$ | $87.89$ |
* Semi-Synthetic Spotify
| Model | T+1 | T+2 | T+3 | T+4 | T+5 |
| :----------- | :------: | :------: | :------: | :------: | -------: |
| CSI-VAE-1 | $68.23$ | $82.94$ | $83.53$ | $81.97$ | $79.63$ |
| CSI-VAE-2 | $253.85$ | $312.53$ | $305.08$ | $303.68$ | $302.83$ |
| CSI-VAE-3 | $757.94$ | $937.07$ | $800.55$ | $704.66$ | $634.72$ |
| GRU-0 | $215.42$ | $260.65$ | $193.41$ | $137.20$ | $117.06$ |
| GRU-1 | $223.61$ | $269.69$ | $205.91$ | $141.53$ | $126.36$ |
| GRU-2 | $154.18$ | $187.42$ | $177.96$ | $133.36$ | $127.58$ |
| LSTM | $130.35$ | $156.02$ | $133.28$ | $94.35$ | $85.92$ |
| Transformer | $133.42$ | $157.66$ | $154.61$ | $164.70$ | $158.03$ |
The attached pdf with box plots provides a visualisation of the above.
---
## Code modifications
Our code will also be updated accordingly (the changes are very localized, basically replacing calls to GRU with other methods already available in PyTorch). Changing from GRU to LSTM boils down to commenting out one line and adding another:
```
#self.rnn_z_x = nn.GRU(self.z_dim+1, hidden_dim, self.num_layers, batch_first=True)
self.rnn_z_x_d = nn.LSTM(self.z_dim+1+hidden_dim, hidden_dim, self.num_layers, batch_first=True)
```
For transformers, we need to first define a transformer encoder with masking.
```
import torch
import torch.nn as nn
import math

class PositionalEncoding(nn.Module):
    def __init__(self, hidden_dim, max_len=5000):
        super().__init__()
        self.hidden_dim = hidden_dim
        pe = torch.zeros(max_len, hidden_dim)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, hidden_dim, 2).float() * (-math.log(10000.0) / hidden_dim))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)  # (max_len, 1, hidden_dim)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # x of size (B, T, hidden_dim)
        B, T, _ = x.size()
        pe = torch.permute(self.pe, (1, 0, 2))  # (1, max_len, hidden_dim)
        x = x + pe[:, :T, :].expand(B, -1, -1)
        return x

class TransformerTimeSeries(nn.Module):
    def __init__(self, hidden_dim, nhead, num_layers=1):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.pos_encoder = PositionalEncoding(hidden_dim)
        encoder_layers = nn.TransformerEncoderLayer(hidden_dim, nhead, dim_feedforward=hidden_dim * 4, batch_first=True)
        self.transformer_encoder = nn.TransformerEncoder(encoder_layers, num_layers)

    def generate_square_subsequent_mask(self, sz):
        # 0.0 where attention is allowed (j <= i), -inf at future positions
        mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
        return mask

    def forward(self, x):
        '''
        input:
            x of size (B, T, hidden)
        output:
            x_T of size (B, T, hidden)
        '''
        x = self.pos_encoder(x)
        _, T, _ = x.size()
        tgt_mask = self.generate_square_subsequent_mask(T).to(x.device)
        h = self.transformer_encoder(x, mask=tgt_mask, is_causal=True)
        return h
```
and then modify the call as in
```
# self.rnn_z_x = nn.GRU(self.z_dim+1, hidden_dim, self.num_layers, batch_first=True)
self.rnn_z_x_d = TransformerTimeSeries(self.z_dim+1+hidden_dim, nhead=8)
```
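For readers who want to sanity-check the masking logic without running the model, the pattern produced by `generate_square_subsequent_mask` above is just a lower-triangular "allow" matrix; a dependency-free sketch of the same pattern:

```python
# Dependency-free rendition of the causal mask above: position i may attend
# to positions j <= i (entry 0.0); future positions are blocked (-inf).
def causal_mask(sz):
    return [[0.0 if j <= i else float('-inf') for j in range(sz)]
            for i in range(sz)]

m = causal_mask(3)
assert m[0][1] == float('-inf')  # step 0 cannot see step 1
assert m[2][0] == 0.0            # step 2 can see step 0
```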
Many thanks again!
Pdf: /pdf/ee08fa3a4e686fe29a466ca9ac35fa639cbee57d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LLM Dataset Inference: Did you train on my dataset? | Accept (poster) | Summary: The paper addresses the limitations of traditional membership inference attacks in identifying if specific text sequences belong to the training data of LLMs. The authors highlight the inadequacies of MIAs and propose a novel dataset inference method. This method focuses on detecting entire datasets used in model training, rather than individual strings, combining multiple MIA metrics to accurately distinguish between training and test datasets with statistically significant results and no false positives. This approach promises to enhance the legality and ethicality of training LLMs.
Strengths: 1. The paper provides a compelling argument for shifting the focus from individual string-based MIAs to dataset-based inference methods.
2. The paper is very well written
Weaknesses: 1. The author demonstrates the failures of MIA methods by assessing them on the training (member) and validation (non-member) splits of the Pile dataset. However, it is unclear whether the validation set is thoroughly decontaminated from the training data. The deduplication method used in the original Pile validation data is quite loose. There is a potential issue that non-member examples might still share high n-gram overlap with the member examples, complicating MIA effectiveness.
2. The proposed approach may lack sufficient novelty: it essentially builds on existing MIA methods in two main ways: (1) by combining various MIA metrics into an ensemble and (2) by extending their application to the distribution of examples. These modifications appear to be straightforward extensions of current MIA methods.
3. The proposed method requires validation data sampled from the same distribution as the test data. However, obtaining such labeled data may not be very practical
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review and are happy to see that you found our work to pose a compelling argument for transitioning toward Dataset Inference, and found the paper well-written. We acknowledge your concerns and attempt to respond to them line by line below:
### Re: Decontamination of Validation Set
Thank you for your comment. We understand the concern regarding potential contamination in the validation set. To address this, we have conducted additional decontamination using a stricter deduplication method that ensures minimal overlap between the training and validation sets. Specifically, we applied a more rigorous n-gram filtering process to eliminate any shared sequences between these sets. Here are a few key points regarding decontamination:
1. **Reference to the Pile Paper**: The original Pile paper ([Gao et al., 2021](https://arxiv.org/pdf/2101.00027)) acknowledged challenges with deduplication and contamination between training and validation sets. Despite their efforts, complete decontamination is difficult to achieve, as some overlap in n-grams is inevitable because of domains being identical (for example, arxiv headers).
2. **Impact of Contamination on Dataset Inference**: If there were any contamination, it would only make dataset inference harder, as the validation set would resemble the training set more closely. Therefore, high n-gram overlap should lead to worse, not better, results. However, our method succeeds in distinguishing between training and validation datasets despite these challenges, indicating the robustness of our approach at solving an even harder problem.
3. **N-gram Overlap**: We agree with your point about n-gram overlap. A concurrent paper ([Duan et al., 2024](https://arxiv.org/pdf/2402.07841)) independently found that there is approximately a 30% overlap in n-grams between Wikipedia and arXiv datasets. This study also indicated that non-members with lower n-gram overlap are more distinguishable by existing MIAs. This supports our claim that even with potential contamination and the inherent challenges of deduplication, our dataset inference method remains effective.
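To make the overlap statistic in point 3 concrete, here is a minimal sketch of an n-gram overlap computation (our own illustration; real deduplication pipelines such as the Pile's use more elaborate tokenisation and hashing):

```python
def ngrams(text, n=3):
    # word-level n-grams of a whitespace-tokenised text
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def ngram_overlap(candidate, corpus_text, n=3):
    # fraction of the candidate's n-grams that also appear in the corpus
    cand, corp = ngrams(candidate, n), ngrams(corpus_text, n)
    return len(cand & corp) / max(len(cand), 1)

assert ngram_overlap("a b c d", "a b c d") == 1.0
assert ngram_overlap("a b c d", "w x y z") == 0.0
```

Under this statistic, a "non-member" split with high overlap against the training set behaves more like a member, which is exactly why contamination makes dataset inference harder rather than easier.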
### Re: Novelty of the Proposed Approach
We appreciate your observation regarding the novelty of our approach. Here are a few key points that highlight the innovative aspects of our method:
1. **Achieving a Previously Near-Impossible Goal**: We make the previously near-impossible goal of membership inference (MI) achievable. This is a significant feat that opens up new possibilities in the field of detecting training data.
2. **Learning which MIAs to Use**: Our method involves "learning" which MIAs to use, a critical step that enhances the accuracy and effectiveness of our approach. This was crucial to overcome the biggest technical challenge of our work, that is, most MIAs were **worse than random**.
3. **Operationalizing the Framework**: Developing the framework required that we address practical considerations, such as how to handle victims who naturally retain drafts or IID sets of their work.
In summary, the main contribution of our work is not just in the specific techniques but in proposing a comprehensive framework and advocating for a shift towards dataset inference as a field of study ripe for modern day generative models.
### Re: Practicality of Obtaining Labeled Validation Data
We acknowledge that obtaining labeled validation data sampled from the same distribution as the test data can be challenging. Indeed, this is the number one open problem that our work creates for future research. That said, a significant part of this work was spent in formalizing a framework. Here are a few scenarios where the IID setting should roughly hold in practice:
- **Editorial Process**: Thousands of NYT articles reach the editor's desk but never get published. These unpublished articles are likely to be more IID compared to two random splits of the Pile dataset.
- **Book Drafts**: This is also very common with book writers when working on a chapter draft. These drafts would be more IID than random splits of the Pile, in our opinion.
However, we truly need a test of IIDness rather than merely asserting it. This would be a really nice (but also hard) area of research for the future. One strong baseline is the recent "Blind MIAs" work by Das et al. (https://arxiv.org/abs/2406.16201). If any pair of datasets can be distinguished based on these blind membership inference attacks, they should be deemed **not** IID. Future work can improve upon this.
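A minimal sketch of such a distinguishability test (ours, purely illustrative, and far simpler than the attacks in Das et al.): if even a trivial model-free classifier separates two splits well above chance, the splits should be deemed not IID:

```python
# Toy "blind" distinguishability check (illustrative): guess which split a
# document came from using only a model-free feature (here, its length).
# Held-out accuracy near 0.5 is consistent with IID splits; accuracy well
# above 0.5 means the splits are distinguishable and hence not IID.
def length_classifier_accuracy(split_a, split_b, threshold):
    correct = sum(len(x) < threshold for x in split_a)    # predict "A" if short
    correct += sum(len(x) >= threshold for x in split_b)  # predict "B" if long
    return correct / (len(split_a) + len(split_b))

short_docs = ["ab", "cd", "ef"]
long_docs = ["abcdefgh", "ijklmnop", "qrstuvwx"]
assert length_classifier_accuracy(short_docs, long_docs, threshold=5) == 1.0
assert length_classifier_accuracy(short_docs, short_docs, threshold=5) == 0.5
```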
Some potential directions for bypassing the IID problem in future work also include:
1. **Using Synthetic Data**: Create IID sets artificially using synthetic data generation techniques.
2. **Storing Hashes**: Creators may store hashes of IID data at the time of publication to ensure data integrity.
3. **Proactive Storage**: More proactive work to store generations of artistic work rather than only the final stage.
---
Once again, we thank you for the constructive feedback on our work. Working on the pointers has helped us improve the quality of our analysis. We look forward to further discussions and improvements in this evolving field.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your response! I appreciate the additional details you provided. I will keep my score unchanged. | Summary: Large language models are trained on a vast amount of online available data, which has led to copyright and privacy issues (e.g. New York Times vs. OpenAI, as pointed out by the authors). There are various methods that try to identify if a given data point x was used to train a large language model.
This paper presents a very systematic analysis of these methods based on the Pythia suite of models.
The conclusion of the paper is that current methods are not well-suited to determining whether specific data was included in a training set.
Strengths: The authors evaluate six metrics in the main part of the paper and a very large number of additional metrics and variations of the initial six in the appendix. Furthermore, the authors test these metrics on a large number of data sets, spanning different domains (e.g. github and wikipedia, which I would consider very different).
The analysis is conducted on the Pythia suite, which means that the authors had full access to all relevant information, in particular, the training data and training methodology. As a result, they had access to the ground truth.
The authors also state that they will release their code once the reviewing process is completed and anonymisation is no longer required.
Weaknesses: It would be interesting to see how the evaluation generalises to commercially available models, such as Gemini and GPT. This could be done e.g. with Wikipedia articles.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please explain why you did not conduct any tests with commercially available models, such as GPT-4, Claude 3, etc.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations were discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review and are happy to see that you found our analysis systematic and our paper technically solid with a high impact, along with an overall positive assessment of the work. We acknowledge your concerns and attempt to respond to them line by line below:
### Re: Generalization to Commercially Available Models
We recognize the interest in seeing how dataset inference generalizes to commercially available models such as GPT-4 and Claude. Here are a few points that limited this, which are also great fuel for future work:
1. **Lack of Black-box MIAs**: In our work, we leverage grey-box access to the models; that is, we assume that the membership inference attacks have access to the loss values from the underlying model. Since MIA research for LLMs is relatively nascent, it has not yet unlocked successful black-box MIAs, unlike the vision space where we have seen label-only MIAs (e.g. [Choo et al.](https://arxiv.org/abs/2007.14321)). We believe future work can unlock this possibility with API access as well.
2. **Access Limitations**: Conducting tests with commercially available models is challenging due to limited access to their training data and methodologies. Unlike the Pythia suite, where we had full access to all relevant information, commercial models do not provide the same level of transparency, making it difficult to obtain the ground truth for evaluation. In particular, the presence of ground truth labels for Pythia models allows us to validate that dataset inference is indeed successful in distinguishing members and non-members.
3. **Alternative Evaluation Methods**: We suggest that future research could explore alternative evaluation methods that do not rely on access to the full training/validation data. For example, researchers could use synthetic data or develop new techniques that infer properties of the training data from the model's outputs. The suggestion of temporally shifted Wikipedia articles would not work in this case because it would give a *false sense* of success, as also noted in the concurrent works of [Das et al.](https://arxiv.org/abs/2406.16201) and [Duan et al.](https://arxiv.org/pdf/2402.07841).
Once again, we thank you for the constructive feedback on our work. Working on the pointers has helped us improve the quality of our analysis. We look forward to further discussions and improvements in this evolving field. Please let us know if we can address any remaining concerns.
---
Rebuttal Comment 1.1:
Title: Answer
Comment: Thank you very much for the rebuttal. I am looking forward to the final version of the paper and any follow-up work on this important issue. | Summary: In this paper, the authors investigate the commonly used membership inference evaluation for LLMs and find that previous attacks primarily detect features related to temporal changes, performing poorly under real IID scenarios. To address the challenge of individual sample membership inference attacks, the authors propose a novel threat model: LLM dataset inference. The proposed method can accurately infer whether a set of data points was used in the training process or not.
Strengths: - The paper is well-written, and I really enjoy reading it.
- The "Failure of Membership Inference" section is really inspiring, and it's very important to the future LLM MIA area.
- The proposed dataset inference method is very simple but effective. The results look very promising.
Weaknesses: - The experiments are conducted on a single series of models, rather than on various models trained with different datasets, algorithms, or even seeds. I think this is a little picky since it's not cheap to retrain a bunch of large models, but I do think it's important for reliable evaluation.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the victim choose the suspect set? What if the suspect set is a mix of member and non-member data points? This might happen when the suspect intentionally trains the model on a subset of the dataset.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors mention the limitations in the appendix, which I appreciate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable feedback and are happy to hear that they enjoyed reading our paper.
>**The experiments are conducted on a single series of models, rather than on various models trained with different datasets, algorithms, or even seeds. I think this is a little picky since it's not cheap to retrain a bunch of large models, but I do think it's important for reliable evaluation.**
We used the Pythia suite of models, trained on the PILE dataset, since it is unique in providing complete access to both the training dataset and the training procedures. This transparency is crucial as it allows for rigorous and replicable experimentation, ensuring the validity and reliability of our results. Without the information provided alongside the Pythia suite of models, we cannot reliably know if a point was part of the pretraining set. This could for instance lead us to overestimate the performance of our method, as was the case for prior work. To the best of our knowledge, no other set of LLMs offers the level of accessibility that Pythia models do (except the very recent [Olmo model series](https://github.com/allenai/OLMo) which only came out after we finished our experimentation). Our work is a call to action for the community to provide more model releases with comparable levels of transparency.
>**How does the victim choose the suspect set? What if the suspect set is a mix of member and non-member data points? This might happen when the suspect intentionally trains the model on a subset of the dataset.**
In general, victims can determine suspect sets based on model behaviour that is suggestive of having been trained on their content. This could include yielding substrings from their novels, blogs, etc. The adulteration of training with only a part of the member data is an interesting consideration. We can reframe the question as follows:
> Assume distributions $ A $ and $ B $ are distinguishable based on a t-test. You can assume they are Gaussian with separate means. Now, an adversary adulterates $ A $ with $ x\% $ of $ B $. Will the t-test succeed in distinguishing the adulterated distribution from $ B $?
To determine if the t-test will succeed in distinguishing the adulterated distribution $ A' $ (where $ A' $ is $ A $ mixed with $ x\% $ of $ B $) from $ B $, we need to consider how the adulteration affects the statistical properties of $ A' $.
1. **Let us assume that the dataset inference scores from stage 3 follow a Gaussian in the original distributions**:
- $ A \sim \mathcal{N}(\mu_A, \sigma_A^2) $
- $ B \sim \mathcal{N}(\mu_B, \sigma_B^2) $
- The t-test can distinguish between $ A $ and $ B $, implying the means and variances are sufficiently different.
2. **Adulterated Distribution**:
- Let $ A' $ be the new distribution obtained by mixing $ A $ with $ x\% $ of $ B $, i.e., taking $ x $ as a fraction, $ A' $ is a mixture of $ (1-x) $ of $ A $ and $ x $ of $ B $.
- The mean of $ A' $ is:
$ \mu_{A'} = (1 - x) \mu_A + x \mu_B $
3. **t-test on $ A' $ vs. $ B $**:
- For $ A' $ and $ B $ to be distinguishable, the t-statistic needs to be large enough:
$ t = \frac{\mu_{A'} - \mu_B}{\sqrt{\frac{\sigma_{A'}^2}{n_{A'}} + \frac{\sigma_B^2}{n_B}}} $
4. **Effect of Adulteration**:
- As $ x $ increases, $ \mu_{A'} $ moves closer to $ \mu_B $.
- For small $ x $, $ \mu_{A'} $ is still relatively close to $ \mu_A $, and the t-test might still distinguish $ A' $ from $ B $.
- For large $ x $, $ \mu_{A'} $ approaches $ \mu_B $, making $ A' $ and $ B $ indistinguishable by the t-test.
The t-test will likely succeed for small $ x $ and fail for large $ x $. The exact threshold of $ x $ depends on the values of $ \mu_A $, $ \mu_B $, $ \sigma_A $, $ \sigma_B $, and the sample sizes $ n_{A'} $ and $ n_B $.
Increasing the sample size $ n_{A'} $ can improve the power of the t-test, making it possible to detect smaller differences between $ \mu_{A'} $ and $ \mu_B $.
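To make the argument above concrete, it can be simulated directly (an illustrative sketch with arbitrary Gaussian parameters and sample sizes of our choosing, not values from our experiments):

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulate adulterating A with a fraction x of B and running the t-test
# against B. Parameters (mu_A=0, mu_B=1, sigma=1, n=1000) are arbitrary.
rng = np.random.default_rng(0)
n = 1000
mu_a, mu_b, sigma = 0.0, 1.0, 1.0

pvals = {}
for x in [0.0, 0.5, 0.9, 1.0]:
    n_b = int(x * n)
    # A' mixes (1 - x) of A with x of B
    a_prime = np.concatenate([
        rng.normal(mu_a, sigma, n - n_b),
        rng.normal(mu_b, sigma, n_b),
    ])
    b = rng.normal(mu_b, sigma, n)
    pvals[x] = ttest_ind(a_prime, b).pvalue
    print(f"x = {x:.1f}  ->  p = {pvals[x]:.3g}")
```

As expected, the p-value grows toward non-significance as $x \to 1$, while larger sample sizes push the detectable threshold of $x$ higher.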
---
Once again, we thank you for the constructive comments. Working on them has helped us improve the quality of our paper. We look forward to further discussions to clarify any concerns that remain.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I believe this paper is in good shape. Therefore, I keep my score positive. | Summary: The paper addresses the challenge of identifying training data in large language models (LLMs) with rising concerns over privacy and copyright violations. As it has previously been studied, traditional membership inference attacks (MIAs), which determine if individual text sequences were used in training, are often flawed due to temporal shifts in data. The authors propose a novel dataset inference method that focuses on identifying entire datasets rather than individual data points, reflecting real-world copyright scenarios where authors' works, like books, are used without permission. Their method combines multiple existing MIA metrics to identify training datasets effectively, achieving statistically significant results with p-values less than 0.1 and minimizing false positives. The paper highlights the method's robustness through extensive testing on the Pythia models and the Pile dataset, providing a more accurate approach to dataset attribution and addressing the shortcomings of prior MIAs.
Strengths: 1. The paper is well-structured, with clear problem statement, explanations of the methods, experiments, and findings. Figures and tables are effectively used to illustrate key points, making the complex subject matter more accessible.
2. Their method is extensively tested on Pythia and the Pile dataset, showing its effectiveness. The use of statistically significant p-values and the absence of false positives strengthen the results.
Weaknesses: 1. The concerns about the accuracy and effectiveness of existing MIA methods have been previously studied in depth in papers such as https://arxiv.org/pdf/2402.07841. There are existing publications which have highlighted the weaknesses in empirical results that detect temporally shifted member/non-member classes.
2. The computational complexity of the proposed algorithm is not addressed in this work. Authors need to provide a detailed quantitative comparison (in terms of algorithm runtime or the number of tokens processed) for their proposed method. Is this method feasible for models with more parameters (70B or higher)?
3. Although authors have studied the Pile Dataset on Pythia extensively, their experiments are just limited to this setting. There is not much insight about how this proposed method generalizes to other models and datasets. Also, I believe that the fact this method only applies to datasets (and not single strings of text) is a limiting factor. The existence of a suspect and a validation subset is not realistic in all cases.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Authors mention they have used 4 A6000 GPUs for experiments. How long it takes for the algorithm to run and test the Pile dataset? What is the runtime complexity for different dataset and model sizes?
2. How does the method handle cases where the IID assumption for suspect and validation sets does not hold? Are there any strategies in place to deal with non-IID data distributions?
3. What are the potential trade-offs in terms of performance and accuracy?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and feedback. We are happy to see that you appreciated the method’s robustness and overall quality of the draft. We address the individual concerns below one by one:
### **Re: Other work studying the failure of MIAs (W1)**
We would like to preface this answer by adding that the linked paper was truly a concurrent work to this submission. Crucially, our work not only shows the limitations of MIAs, but goes beyond that by proposing a comprehensive framework to detect if a given dataset (not string) was used in the training of an LLM, a shift that is particularly relevant for modern generative models.
We would like to note that we indeed also duly acknowledged this concurrent work in our submission on lines 187 and 188: “A similar question was concurrently asked by Duan et al. [15], who independently showed that MIAs are only successful because of the temporal shift in such datasets.”
### **Re: Computation Complexity (W2, Q1, Q3)**
The computational complexity of the method is relatively low, and roughly equivalent to the computation cost of performing each MIA required in our work.
1. For instance, considering the Min-K% MIA, this would mean performing a single forward pass on each example, and aggregating the token-wise loss to calculate the final metric score. Other methods are perturbation-based or reference model-based, requiring two forward passes.
2. In the attached code, we provide a detailed method to perform each of the 50 MIAs in `metrics.py`. Many of the MIAs are dependent on each other; for example, all the Min-K% and Max-K% MIAs can be computed in a single forward pass. Overall, in practice we only require 17 forward passes to perform all the attacks.
3. Most datasets require fewer than 500 examples (of train and val) to perform dataset inference, which means performing on the order of 1K forward passes. In our experiments, each forward pass uses a context length of 1024, but this can change depending on the model configuration.
4. Finally, most of this computation can be batched to perform many forward passes together. Given modern-day hardware, this cost is low enough that it can be performed in about 4 hours on an A6000 GPU node even for models like pythia-12B (for one dataset).
5. The final cost of “learning” which MIAs to use is almost negligible, as it just requires fitting a 50-dimensional linear layer to a few values (this can be done on CPUs in less than 30 seconds).
To conclude, the method is indeed feasible for models with more parameters. We tested on models from the Pythia suite, where the biggest one has 12B parameters, but the method can easily be run for models with 70B parameters (like Llama 2) or larger.
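To illustrate point 1 above, here is a minimal sketch of a Min-K%-style score for a single example (a simplification for illustration; the exact implementations we used are in the attached `metrics.py`). The per-token log-probs are assumed to come from one forward pass of the model:

```python
import numpy as np

def min_k_score(token_log_probs, k=0.2):
    """Min-K% score for one example: the mean of the lowest k-fraction
    of per-token log-probabilities from a single forward pass."""
    log_probs = np.sort(np.asarray(token_log_probs))  # ascending: lowest first
    n_lowest = max(1, int(k * len(log_probs)))
    return float(log_probs[:n_lowest].mean())

# Toy example with 10 per-token log-probs; k=0.2 averages the 2 lowest.
score = min_k_score([-0.1, -2.5, -0.3, -4.0, -0.2,
                     -1.1, -0.05, -3.2, -0.4, -0.6], k=0.2)
print(score)  # -3.6 (mean of -4.0 and -3.2)
```

Members tend to have fewer surprising (low-probability) tokens, so higher scores weakly suggest membership; our method aggregates many such scores rather than thresholding any single one.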
### **Limitations of Pile Dataset + Pythia**
We agree that repeating the experiments on more models and datasets would lead to stronger empirical supports. When it comes to open-weights + open-source pretrained models released online, to the best of our knowledge, the Pythia suite of models is the only one releasing complete information about its training + validation dataset (along with the more recent [Olmo model series](https://github.com/allenai/OLMo) which only came out after we finished our experimentation). Without this information, we cannot reliably know if a point was part of the pretraining set and thus cannot reliably validate the proposed method. We would like to emphasize that even though the Pythia suite and Pile Dataset may seem like one single entity, this is actually a collection of 4 models x 20 data subsets, that offers significant generality to our method, much beyond any other random split of publicly available data might offer.
### **Limitation of application to datasets (and not single strings of text)**
We believe that in the modern landscape of generative AI, victims naturally have multiple sequences of text on the internet. For instance, an article in the New York Times would be composed of multiple sequences of context length 1024. Similarly, the same organization, like the NYT, has multiple articles that it together wants to own copyright over. Victims who file legal suits will likely always possess such characteristics. Hence, it is quite natural that we are in a scenario where individual sequences no longer exist in isolation (unlike past classification datasets in the vision space). This setting is also relevant to creative artists with many photographs and paintings.
### **Existence of IID validation dataset**
We acknowledge that obtaining labeled validation data sampled from the same distribution as the test data can be challenging. Indeed, this is the number one open problem that our work creates for future research. That said, a significant part of this work was spent in formalizing a framework. Here are quite a few scenarios where the IID setting should roughly happen in practice:
- **Editorial Process**: Thousands of NYT articles reach the editor's desk but never get published. These unpublished articles are likely to be more IID compared to two random splits of the Pile dataset.
- **Book Drafts**: This is also very common with book writers when working on a chapter draft. These drafts would be more IID than random splits of the Pile, in our opinion.
Some potential directions for bypassing the IID problem in future work also include:
1. **Using Synthetic Data**: Create IID sets artificially using synthetic data generation techniques.
2. **Storing Hashes**: Creators may store hashes of IID data at the time of publication to ensure data integrity.
3. **Proactive Storage**: More proactive work to store generations of artistic work rather than only the final stage.
----
Once again, we thank you for the constructive feedback on our work. Working on the pointers has helped us improve the quality of our analysis. We look forward to resolving any remaining concerns during the rebuttal-response period.
---
Rebuttal Comment 1.1:
Comment: Thanks, it would have been nice to add more experiments on the Olmo model series once these models were released.
I keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: We understand the interest in additional experiments on the Olmo model series and put sincere efforts to this end over the last few days. However, after a thorough investigation, we recognized that there are limitations in the way the data is provided that prevent the results of dataset inference from being conclusive (results and constraints shared below).
The main issue is that while evaluation sets are provided per data source (for instance, the [Wikitext validation data](https://github.com/allenai/OLMo/blob/0bc7f6c704baf040b8545de943bb015d1c3e5970/configs/official/OLMo-1B.yaml#L115)), the training data is fully mixed across all sources which we confirmed by downloading the [data files](https://github.com/allenai/OLMo/blob/0bc7f6c704baf040b8545de943bb015d1c3e5970/configs/official/OLMo-1B.yaml#L198) provided. This data is stored in terms of tokens fed to the model, without clear links to their original sources (instructions on [inspecting the training data](https://github.com/allenai/OLMo?tab=readme-ov-file#inspecting-training-data)). This makes dataset inference on a specific source, such as Wikitext, impractical with the current setup.
We attempted a dataset inference test using Wikitext's validation versus train batch data (linked above), and it resulted in a trivially low p-value (<1e-34) with 500 samples. While this may seem like a positive result, we believe that the non-iid nature of the data may have a part to play here. Given these constraints, we believe that the extensive experiments we have already conducted on 20+ domains of the Pile dataset and across 4 different model sizes represent a substantial and rigorous evaluation of our method. These experiments provide valuable insights into the model's generalizability and soundness---knowing the ground truth is critical to capturing the soundness of the method. We want to re-emphasize that while the Pile dataset may sound like one homogenous entity, it is a collection of multiple domains and the closest resemblance to how dataset inference will happen in practice---authors with individual distributions (like the New York Times) will claim they were trained on.
Have we been able to resolve the other 5 concerns you had in the initial review? | Rebuttal 1:
Rebuttal: We appreciate the positive, encouraging, and constructive feedback. We are pleased that the reviewers recognize the significance of the problem (Reviewer VFio), consider the paper well-written (Reviewers VFio, pZrT, G1MR), and found it enjoyable to read (Reviewer Nmtj). Our work, motivated by showing that MIAs for LLMs detect distribution shift rather than actual membership, was appreciated by all reviewers and described as inspiring (Reviewer Nmtj).
We designed a method to detect if a given dataset was used to train an LLM, a contribution recognized by all reviewers. Ensuring clarity and comprehensibility was crucial (Reviewer VFio), and we are glad this was effectively conveyed. The LLM dataset inference method is robust and does not return any false positives (Reviewers NHN3, pZrT). The experimental results are comprehensive (Reviewer bhaD), and we hope the released code will be a valuable asset for other researchers. Overall, the paper provides a compelling argument for shifting the focus from individual string-based MIAs to dataset-based inference methods (Reviewer G1MR). | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper addresses the problem of Dataset Inference in Large Language Models - namely, the ability to detect whether the given dataset was used by LLM developers in pre-training.
First, the authors demonstrate the importance and feasibility of the Dataset Inference task in comparison with Membership Inference Attacks. Namely, they show that existing MIA methods cannot derive membership when test examples are sampled from the same distribution as training data.
Second, they propose a novel method of dataset inference, based on aggregating several MIA scores as features for a linear classifier. According to their experiments, the proposed approach does not produce any false positive results, discovering subsets of the LLM training dataset with high accuracy. The algorithm requires only 1000 data points from the whole dataset to discover if it was used for training.
The proposed method is tested on several PILE subsets, using the Pythia model family as the base model. The authors demonstrate that the efficiency of the proposed method increases with model size.
Strengths: + Authors demonstrate the failure of the SOTA MIA method (top-k% score) on different subsets of the PILE dataset. Namely, they prove experimentally that this method detects distribution shift rather than actual membership. Besides, they show that among the several considered MIA scores, none has consistent efficiency over different domains of PILE. This analysis is essential for future research in this direction
+ Authors propose a novel approach for dataset inference, based on aggregation of several MIA scores. The efficiency of the proposed method is demonstrated by testing on a wide range of domains, and with several model sizes.
+ The method has shown its efficiency in the problematic case of the IID data: namely, when suspect and validation data are derived from the same dataset by random sampling
+ The overall detection quality, demonstrated by experiments, is extremely high on all tested domains.
Weaknesses: 1. The proposed method requires providing suspect and validation datasets with the same distribution. In a practical situation, it is not clear how to obtain this (see Q2 to authors).
2. Authors claim that only 1000 examples are enough for Dataset Inference. Meanwhile, training and validation sets are also required from the same data distribution. In practice, the method is not tested in a real few-shot regime.
3. The method is tested on a single model family, and the efficiency is demonstrated only on the largest model of the family.
4. The method is tested on the PILE dataset, with access to a validation set from the same distribution, which was certainly not used for training. There are no experiments with less clean data or a less transparent model.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. It is not clear why top-k% MIA is chosen to demonstrate general MIA failure. First, according to the original paper [32], this MIA, by design, should be stable for paraphrasing; that's why it is tested on the deliberately collected dataset (WikiMia) with different *events* distribution, not just different strings.
The setup of the current paper is different (and more difficult): the authors aim to detect membership of the dataset when the whole data distribution is shared between suspect and validation sets. Figure 5(a) confirms that min-k% MIA is not suitable for this setup. At the same time, other methods provide a much more informative signal. Why don't you check, e.g., perturbation-based approach, claiming that existing MIAs cannot detect membership?
Q2. The proposed method requires providing IID data for check and the "equivalent" set of data for validation. Is it possible to define the notion of "IID" more strictly, for practical use? E.g., suppose the author provides a book as "suspect" data. Is the unprinted chapter of the book "IID," if it may contain novel events and characters? On the other hand, can we consider the draft version of the book as IID, taking into account that the final version was post-processed by the editor and layout designer?
What if I want to check the presence of some benchmark data in pre-training of some LLM with open weights. Which "suspect" and "valid" data should I use?
Q3. Is it possible to reduce the amount of used features? E.g. what is the drop of performance if the top-k% score is excluded from the feature set of the classifier? Which features are the most informative in general?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The results of the paper were not checked in realistic settings. In general, it is not clear if the method can be applied to the analysis of existing models with grey-box access.
Authors claim that their approach is extremely stable and does not produce any false positives, but the experiments are not sufficient for such a claim.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and comments!
### **W1, Q2: Existence of IID validation dataset**
We acknowledge that obtaining labeled validation data sampled from the same distribution as the test data can be challenging. Indeed, this is the number one open problem that our work creates for future research. That said, a significant part of this work was spent in formalizing a framework. Here are quite a few scenarios where the IID setting should roughly happen in practice:
- **Editorial Process**: Thousands of NYT articles reach the editor's desk but never get published. These unpublished articles are likely to be more IID compared to two random splits of the Pile dataset.
- **Book Drafts**: This is also very common with book writers when working on a chapter draft. These drafts would be more IID than random splits of the Pile, in our opinion. As the Reviewer raised in Q2, independent chapters of books may not be IID, but their progression will likely yield an IID set. However, this is up to debate and in practice we need an IID verifier (in future work) as opposed to arguing about the same. One strong baseline for this is the recent “Blind MIAs” work by Das et al. https://arxiv.org/abs/2406.16201. If any pair of datasets can be distinguished based on these blind membership inference attacks, they should be deemed **not** IID. Future work can improve upon this.
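As a very rough sketch of what such an IID verifier could look like (a hypothetical illustration of ours, deliberately much weaker than the blind MIAs of Das et al.): compare a model-free surface statistic across the two sets and deem them not IID if it differs significantly.

```python
from scipy.stats import ttest_ind

def crude_iid_check(texts_a, texts_b, alpha=0.01):
    """Hypothetical, crude IID screen (NOT the method of Das et al.):
    a t-test on word counts. A significant difference is sufficient to
    deem the pair not IID; passing the screen proves nothing on its own."""
    lengths_a = [len(t.split()) for t in texts_a]
    lengths_b = [len(t.split()) for t in texts_b]
    _, p = ttest_ind(lengths_a, lengths_b)
    return bool(p >= alpha)  # False => distinguishable => deemed not IID

# Sets with clearly different length distributions are flagged as not IID.
short = [" ".join(["w"] * n) for n in [3, 4, 3, 5, 4] * 10]
long_ = [" ".join(["w"] * n) for n in [10, 11, 12, 10, 11] * 10]
print(crude_iid_check(short, long_))  # False
print(crude_iid_check(short, short))  # True
```

A real verifier would combine many such distinguishers (n-gram statistics, classifier two-sample tests, blind MIAs) and only declare a pair usable when all of them fail to distinguish it.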
At the same time, some potential directions for bypassing the IID problem in future work also include:
1. **Using Synthetic Data**: Create IID sets artificially using synthetic data generation techniques.
2. **Storing Hashes**: Creators may store hashes of IID data at the time of publication to ensure data integrity.
3. **Proactive Storage**: More proactive work to store generations of artistic work rather than only the final stage.
### **W2: Requirements on amount of data**
In practice, the size of the training and validation datasets can be exactly equal to the size of the dataset used for testing the attack. This means that even if a potential victim has only 1000 sequences of 1024 length context of text data that was trained on (along with an equivalently sized validation set), this is a sufficient condition for performing dataset inference. As shown in Figure 6, the number of examples needed for dataset inference to be successful decreases with model size and also depends on the dataset, which means we may require less than 1000 sequences in practice.
### **W3: The method is tested on the single model's family**
The Pythia suite of models, trained on the PILE dataset, is unique in providing complete access to both the training dataset and the training procedures. This transparency is crucial as it allows for rigorous and replicable experimentation, ensuring the validity and reliability of our results. Without the information provided alongside the Pythia suite of models, we cannot reliably know if a point was part of the pretraining set. This could for instance lead us to overestimate the performance of our method, as was the case for prior work. To the best of our knowledge, no other set of LLMs offers the level of accessibility that Pythia models do. Our work is a call to action for the community to provide more model releases with comparable levels of transparency.
### **W3: and the efficiency is demonstrated only on the largest model of the family**
We also demonstrated the efficiency of our method on other models from the suite, in addition to the largest one. Please refer to Figure 10 in the Appendix, which is an extended version of Figure 6a (from the main paper) with different model sizes. An interesting observation is that the curves for median p-values and Wikipedia’s p-value stay similar with respect to models of different sizes. However, this is not the case for the max p-value curve. This indicates that dataset inference does not rely on a large number of data points or model parameters for most datasets, whereas they may be necessary for some particular datasets which smaller models do not learn well.
### **W4: Less clean data and less transparent model**
For less transparent models, there is no ground truth to determine whether a specific data point was used in training or not. Even if we hypothesize that a model is trained on a particular data source, such as Wikipedia, we lack access to an unseen IID validation set. Note that simply collecting Wikipedia articles published after the model’s release is insufficient due to the temporal shift in concepts. Since this is the main limitation of prior work, we opted to study the Pythia suite of models in our experiments.
### **Q1: Other MIAs**
We fully agree that the top-k% MIA alone is not representative enough to claim the failure of all MIAs. This is why we tested all MIAs we could find, including the perturbation-based ones, and reported these results in Figure 7 -- we apologize that due to the large number of MIAs we tested this figure cannot fit in the main body of the manuscript; we will either add it to the main paper (using the additional 1 page if accepted) or emphasize the reference to the Figure in the main body.
### **Q3: Features used in dataset inference**
We performed an ablation study for the features used in dataset inference and the results are shown in Figure 5(a) with a more detailed version in Figure 8 in the Appendix. We did not find that any particular features were most informative across all datasets. Instead, feature informativeness is highly dataset dependent--most of the features contribute positively for some datasets but negatively for others.
---
Once again, we thank the reviewer for the constructive feedback on our work. Working on the feedback has helped us improve the quality of our work. We look forward to further discussions to clarify any concerns that remain. | Summary: This paper tackles the dataset inference problem to detect a specifically trained dataset such as a licensed dataset. Firstly, the authors claim that previous membership inference attacks (MIAs) are not successful in discriminating between members and non-members from the same distribution (iid), which is a less realistic setting. The authors propose selectively combining multiple membership inference metrics by linear regression. Experimental results demonstrate its efficacy on the PILE dataset by aggregating 52 different MIA metrics.
Strengths: - The problem in this paper is important, regarding the high risk where copyright text is used for training current LLMs.
- The paper is well written, especially summarizing previous related work and methods. It’s easy to follow.
- This paper criticizes the problem settings of previous papers and proposes more realistic scenarios — e.g., iid of suspect and valid dataset.
- The proposed method is simple and straightforward; an ensemble of multiple attack methods outperforms a single metric.
Weaknesses: - Absence of the hyperparameter gamma. Depending on which gamma value was chosen, each MIA's performance would vary.
- It is not explored whether the proposed method is also robust to unseen datasets beyond the PILE dataset, as well as to models other than Pythia.
- The title “LLM Dataset Inference: Detect Datasets, not Strings” is too broad and may lead readers to misunderstand it as proposing a “Dataset inference” task and method, whereas the paper in fact proposes a mixture of MIAs to improve robustness.
- Qualitative analysis of samples is absent. Is there any interesting analysis or distinguished result between attack samples (victim data) with low and high p-values?
Technical Quality: 3
Clarity: 3
Questions for Authors: - How did you select the 52 MIA metrics? Sharing the standard you used will be beneficial to other researchers.
- In Figure 6 (b), training set deduplication results, why does the violin plot distribute like sandglass? — i.e., the distribution mass on the higher p-values is thicker than near the p-value of 0.5. Moreover, dedup and non-dedup settings are not described.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors adequately addressed its limitations and broad impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and positive feedback.
### **W1: Absence of hyperparameter gammas.**
We measure the success rate of MIAs using the Area Under the Curve (AUC) metric. The AUC metric is advantageous because it provides a comprehensive measure of the model's performance that is independent of the parameter $\gamma$. This independence ensures that our evaluation is robust and not influenced by the specific choice of $\gamma$. As far as dataset inference is concerned, it only uses the score given by each MIA and automatically determines how to weigh each MIA in Stage 2, as described in the paper, so it likewise does not need this parameter.
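To make the threshold-independence concrete, here is a minimal sketch (illustrative only, not the paper's evaluation code, with toy scores) computing AUC as the probability that a randomly chosen member outscores a randomly chosen non-member; no threshold $\gamma$ appears anywhere:

```python
# Illustrative sketch: AUC for an MIA, computed as the probability that a
# random member receives a higher score than a random non-member (ties
# count half). Note there is no decision threshold (gamma) in this metric.
def mia_auc(member_scores, nonmember_scores):
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Toy scores: members tend to score slightly higher.
members = [0.9, 0.7, 0.6, 0.4]
nonmembers = [0.8, 0.5, 0.3, 0.2]
print(mia_auc(members, nonmembers))  # -> 0.75
```

Equivalently, this is the normalized Mann-Whitney U statistic, so any monotone rescaling of the MIA scores leaves the AUC unchanged.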
### **W2: Explore datasets beyond the Pile dataset and other models except Pythia.**
The Pythia suite of models, trained on the PILE dataset, is unique in providing complete access to both the training dataset and the training procedures. This transparency is crucial as it allows for rigorous and replicable experimentation, ensuring the validity and reliability of our results. Without the information provided along the Pythia suite of models, we cannot reliably know if a point was part of the pretraining set. This could for instance lead us to overestimate the performance of our method, as was the case for prior work. To the best of our knowledge, no other set of LLMs offers the level of accessibility that Pythia models do. Our work is a call to action for the community to provide more model releases with comparable levels of transparency.
### **W3: The title is too broad. The paper proposes a mixture of MIAs.**
While dataset inference builds upon membership inference, it argues for a new paradigm of privacy research where we consider the membership of an entire dataset rather than individual text sequences. To clarify our stance, we are only using a mixture of MIAs to perform the task of dataset inference. Even when each MIA performs close to (and often worse than) random, dataset inference can tease out statistical signals from weak membership attacks.
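A hedged toy simulation of this effect (fully synthetic scores; uniform weights stand in for the learned Stage-2 regression weights, and none of the constants come from the paper): each individual attack carries only a tiny per-example signal, yet averaging 52 attacks over many data points yields a clear dataset-level gap.

```python
import random

random.seed(0)

N_ATTACKS, N_POINTS, SHIFT = 52, 1000, 0.05  # tiny per-attack signal

def attack_scores(is_member):
    # One row of 52 noisy MIA scores per data point; members get a small
    # mean shift that is invisible per attack (AUC close to chance).
    mean = SHIFT if is_member else 0.0
    return [[random.gauss(mean, 1.0) for _ in range(N_ATTACKS)]
            for _ in range(N_POINTS)]

def aggregate(rows):
    # Uniform weights as a stand-in for the learned regression weights.
    return [sum(r) / len(r) for r in rows]

member_agg = aggregate(attack_scores(True))
nonmember_agg = aggregate(attack_scores(False))

gap = (sum(member_agg) / len(member_agg)
       - sum(nonmember_agg) / len(nonmember_agg))
print(f"dataset-level mean gap: {gap:.4f}")  # clearly positive
```

The per-example noise shrinks by roughly the square root of the number of attacks times the number of data points, which is why a dataset-level test can succeed even when every single attack is near chance.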
### **W4: Qualitative analysis of low and high p-values?**
Our p-values for true positive cases are significantly below the set threshold of 0.1, while the p-values for true negative cases are substantially above this threshold. However, p-values alone are not well suited for qualitative analysis because they are a probabilistic measure rather than a direct indication of effect size or practical significance. Instead, we can use the number of data points required by our LLM dataset inference method. We observe that structured and well-formed datasets, such as Wikipedia, require fewer data points (i.e., 50) for effective inference compared to less structured datasets like Pile-CC (which requires 300 data points), which contain raw web pages. The table below presents the number of data points required by our method per dataset on the Pythia-12b model.
|Dataset|Number of data points for our method|
|-|-|
|Pile-CC|300|
|PubMedCentral|700|
|Books3|100|
|OpenWebText2|300|
|ArXiv|300|
|Github|500|
|FreeLaw|400|
|StackExchange|50|
|USPTOBackgrounds|150|
|PubMedAbstracts|<=10|
|Gutenberg(PG-19)|100|
|OpenSubtitles|150|
|Wikipedia(en)|50|
|DMMathematics|400|
|UbuntuIRC|<=10|
|BookCorpus2|20|
|EuroParl|50|
|HackerNews|50|
|YoutubeSubtitles|20|
|PhilPapers|<=10|
### **Q1: How did you select the 52 MIA metrics?**
We incorporated all available MIAs at the time of developing our method and further extended them to extract additional features, thereby capturing more information and enhancing the success of dataset inference. For instance, rather than using a single K parameter for the MinK-Prob MIA [32], we employed multiple values of K. We performed an ablation study for the features used in dataset inference, and the results are shown in Figure 5(a) with a more detailed version in Figure 8 in the Appendix. We did not find that any particular features were most informative across all datasets. Instead, feature informativeness is highly dataset-dependent--most of the features contribute positively for some datasets but negatively for others. However, we do see some redundancy among many of the Min-k% and Max-k% MIAs; hence, they can be omitted if required without a performance tradeoff. That said, there is no computational overhead in computing all of them, as they can all be obtained in a single forward pass of the model.
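For illustration, extracting multiple features from a single MIA family might look like the following sketch of Min-K%-Prob-style features (the token log-probabilities are hypothetical, and the exact feature definitions used in the paper may differ): for each K, average the lowest K% of token log-probabilities of a sequence.

```python
# Illustrative sketch (not the paper's code): one Min-K%-Prob-style feature
# per choice of K, all computable from a single forward pass's log-probs.
def min_k_prob(token_logprobs, k_percent):
    sorted_lp = sorted(token_logprobs)            # lowest log-probs first
    k = max(1, int(len(sorted_lp) * k_percent / 100))
    return sum(sorted_lp[:k]) / k

# Hypothetical per-token log-probabilities for one sequence.
logprobs = [-0.1, -2.3, -0.5, -4.0, -0.2, -1.1, -0.05, -3.2]
features = {k: min_k_prob(logprobs, k) for k in (10, 20, 30, 40, 50)}
print(features)
```

Each K yields one scalar feature for the downstream linear model, so sweeping K turns one MIA into a small family of correlated features.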
### **Q2: In Figure 6 (b), training set deduplication results, why does the violin plot distribute like sandglass?**
The main purpose of Figure 6 (b) was to observe whether the larger models experience more memorization, and how this can allow performing dataset inference more reliably. We would refer you to Figure 10 in the Appendix, which provides a better breakdown of how different model sizes perform for dataset inference. Once again, since the p-value comes from a statistical test, we generally do not look at its qualitative distribution, only its binary position below or above the chosen significance threshold. Overall, this suggests that for smaller models there may be some false negatives (because they do not learn the distribution well).
### **Describe dedup and non-dedup settings.**
We tried to explain the setup for the experiment in the description of Figure 6 (b) and lines 331-332. However, we will rewrite it to clearly state that: *Deduped* denotes a version of the Pile dataset where the documents are deduplicated within, and across the data splits (train/test/validation). *Non-Deduped* is the original version of Pile without any deduplication.
---
Working on the feedback helped us improve the quality of our work. We look forward to further discussions to clarify any concerns that remain. | null | null | null | null |
Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data | Accept (poster) | Summary: The paper introduces the Federated Transformer (FeT), a novel framework for Vertical Federated Learning (VFL) that addresses the challenges of fuzzy identifiers in multi-party scenarios. It incorporates three key innovations to enhance performance, privacy, and reduce communication overhead: positional encoding averaging, dynamic masking, and a party dropout strategy. The approach significantly outperforms baseline models in scalability and privacy preservation.
Strengths: 1. The integration of a transformer architecture for managing fuzzy identifiers in multi-party VFL is innovative. The dynamic masking and party dropout strategies are creative solutions that enhance scalability and reduce communication costs.
2. The proposed model is well-articulated, with rigorous experimental validation that demonstrates substantial improvements over existing methods. The integration of differential privacy and secure multi-party computation strengthens the privacy aspect.
3. The paper is well-organized with clear explanations of the problems, proposed solutions, and results. It is accessible to readers with a background in federated learning.
4. The results show significant improvements in both performance and privacy, making it a valuable contribution to the field of federated learning, especially in applications involving sensitive data across multiple parties.
Weaknesses: 1. The performance improvements reported are impressive; however, how dependent are these improvements on the initial conditions of data alignment and the distribution of fuzzy identifiers? Could the authors provide insights on the robustness of the Federated Transformer (FeT) under less ideal conditions?
2. While the paper discusses scalability extensively, there is less focus on the computational resources required. Could the authors comment on the computational overhead and the practicality of deploying FeT in real-world scenarios with potentially limited resources?
3. The FeT introduces several complex mechanisms such as dynamic masking and positional encoding averaging. How do these additions affect the training time and complexity of the model? Is there a significant trade-off between performance and efficiency?
4. The experiments are conducted primarily on synthetic datasets. How well does FeT generalize to other real-world datasets, particularly those with higher levels of noise and less structured data?
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see Weakness and Limitations.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. I think you can give us a better case diagram to help us better understand your new task.
2. I think there is an update error in line 16 of your algorithm.
3. In the Party Dropout section of the article, around line 188, you said that the communication overhead on the primary party can be reduced by up to 80%, but I did not find the corresponding experiment or proof in the article. I think this part needs to be better explained.
4. The experimental settings focus on synthetic and controlled environments. Real-world applications might introduce variables not accounted for in this study, potentially affecting the generalizability of the results.
5. The complexity of the model and the required computational resources are not thoroughly discussed, which could be a limitation for practical deployment in resource-constrained environments.
6. While the model addresses fuzzy identifiers, the performance heavily relies on the quality of the linkage, which may not always be feasible or accurate in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments; we have addressed all concerns below.
W1, L6. **How FeT Relies on Linkage Quality**: FeT is significantly **less reliant** on initial linkage quality than traditional VFL models like Top1Sim [4,5]. While Top1Sim trains only on records linked by privacy-preserving record linkage (PPRL) [3], FeT requires PPRL only to **approximate the K-nearest neighbors** and trains directly on these records, without needing them ordered by distance. Evidence for FeT's robustness includes:
- **Ablation Study on K** (Figure 14, "rebuttal.pdf"): FeT consistently outperforms baselines with many unrelated data records included (K > 50), showing resilience to low linkage quality.
- **Performance with Different Fuzziness** (Figure 12, "rebuttal.pdf"): FeT outperforms baselines with moderate to high noise in identifiers, demonstrating strong resilience to fuzziness.
- **Dynamic Masking Visualization** (Figure 11, "rebuttal.pdf"): The visualization demonstrates that dynamic masking plays a crucial role in handling low-quality linkage, effectively focusing on a few close and relevant records among 4,900 fuzzily linked ones.
**Performance of FeT in "Less Ideal" Cases**: FeT performs better in **general fuzzy VFL scenarios** but slightly underperforms Top1Sim [4,5] in **ideal cases with precisely matched identifiers**. This limitation is discussed in Appendix D with experiments on VFL datasets with randomly assigned precise IDs. Moreover, as shown in Figure 12 of "rebuttal.pdf," FeT only slightly underperforms Top1Sim in low-noise conditions on the `gisette` dataset.
W2, W3, L5. **Why Focus on Performance Over Efficiency?**: In multi-party fuzzy VFL, the main challenge is that fuzzy VFL models like FedSim [2] **perform worse than Solo training** in the multi-party setting, rather than computational efficiency. FeT, as the first effective approach to multi-party fuzzy VFL, prioritizes performance to address this critical gap. For clarity, we will change the title to "_Federated Transformer: **Multi-Party** Vertical Federated Learning on Practical Fuzzily Linked Data_" and other related contents.
**Efficiency of FeT**: We compared FeT's computational and memory efficiency with FedSim [2], the state-of-the-art fuzzy VFL model, as shown in Table 11 of "rebuttal.pdf," and identified three key findings:
1. **Parameter Efficiency**: FeT has a comparable or even smaller (23%-129%) number of parameters than FedSim, indicating that its performance improvement is due to model design rather than parameter scaling.
2. **Memory Efficiency**: FeT is significantly more memory efficient, consuming only 20-39% of the memory compared to FedSim. However, this efficiency comes at the cost of training speed. FeT performs neighbor search in parallel during real-time training, whereas FedSim spends hours linking top-K neighbors and preloads all linked neighbors into GPU memory, leading to longer linkage times, repeated data records, and higher memory usage.
3. **Overhead of New Components**: As detailed in Table 11 of "rebuttal.pdf," the additional components in FeT - dynamic masking and positional encoding - add minimal extra parameters (1k - 0.4M), causing only a slight computational overhead (0-5 seconds per epoch slowdown).
In summary, FeT delivers **better performance and improved memory efficiency with a similar number of parameters** compared to FedSim, despite slightly lower training speed. Further optimization techniques, such as pipeline parallelism, could enhance FeT’s training speed but are beyond this study's scope.
**VFL with Limited Resources**:
1. **FeT's Limited GPU Memory Performance**: Table 11 ("rebuttal.pdf") shows that FeT performs well even with GPU memory under 1 GB, making it viable for resource-constrained VFL scenarios.
2. **Cross-Silo VFL Prevalence**: Cross-device VFL with limited resources is rare. VFL typically occurs in cross-silo settings [3], involving dozens to hundreds of parties with adequate computational power. Discussions with industry collaborators providing commercial VFL services confirm that it's uncommon for a single user to have features spread across thousands of distinct parties.
W4, L4. Three of our datasets (`house`, `taxi`, `hdb`) are real-world VFL datasets, identical to those used in FedSim [2]. Each party's data comes from different real sources; for example, the `taxi` dataset includes data from New York taxis and CitiBike. These datasets effectively demonstrate FeT's performance in real VFL applications.
L1. In Figure 15 of "rebuttal.pdf," we present a real-world application involving travel cost prediction in a city through collaboration among taxi, car, bike, and bus companies. Since personal travel information is private and cannot be shared, VFL is essential. Additionally, route identifiers - starting and ending GPS locations - can only be fuzzily linked, but linking closely related source and destination points with multi-party fuzzy VFL can significantly improve prediction accuracy.
L2. We have fixed this typo in the revision.
L3. The 80% communication reduction is straightforward from calculations, as the communication reduction is nearly proportional to the party dropout rate. Dropped parties don't participate in gradient and representation exchanges, which account for most communication in VFL.
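The calculation can be sketched under an assumed cost model in which per-step communication on the primary party is proportional to the number of active secondary parties (the cost function and constants below are illustrative, not from the paper):

```python
# Back-of-envelope sketch (assumed cost model): each active secondary party
# exchanges a fixed amount of representation/gradient traffic per step, and
# a party dropped in a step exchanges nothing. Constants are illustrative.
def comm_per_step(n_parties, bytes_per_party, dropout_rate):
    active = n_parties * (1.0 - dropout_rate)
    return active * bytes_per_party

full = comm_per_step(50, 1.0, 0.0)      # all 50 parties active
dropped = comm_per_step(50, 1.0, 0.8)   # party dropout rate of 0.8
print(1 - dropped / full)  # ~0.8, i.e., an 80% reduction
```

Under this model, the communication saving equals the dropout rate, which is the sense in which the reduction is "nearly proportional" to it.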
**References**
[1] Li, et al. "Learnable fourier features for multi-dimensional spatial positional encoding." NeurIPS 21.
[2] Wu et al. "A coupled design of exploiting record similarity for practical vertical federated learning." NeurIPS 22.
[3] Vatsalan et al. "Privacy-preserving record linkage for big data: Current approaches and research challenges." Handbook of big data technologies, 17.
[4] Hardy et al. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv 17.
[5] Nock et al. Entity resolution and federated learning get a federated resolution. arXiv, 18.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. Based on your response and the other reviewers' comments, I have changed my rating to "Borderline Reject".
Strengths: 1. The paper tackles the important topic of achieving multi-party fuzzy VFL with both promising performance and robust privacy.
2. The design of using a transformer to encode fuzzy identifiers is quite novel.
3. The code is publicly available.
Weaknesses: 1. The paper is not well motivated. It would be great if the paper could provide concrete use cases of fuzzy VFL, that involves fuzzy data/linkages while requiring the privacy needs of distributed machine learning. The German Record Linkage Center might not be an appropriate example for federated learning.
2. Several concepts in fuzzy VFL need further clarification. For example, what strategies are used in existing fuzzy linkage? How are the fuzzy identifiers presented? How is privacy preserved in fuzzy identifiers?
3. The design rationale behind dynamic masking is unclear. Why is dynamic masking necessary? How does it function?
4. The novelty of party dropout needs further elaboration. Dropout is widely used in federated learning. How does the proposed approach differ from existing designs?
5. It is unclear why the proposed framework employs two different privacy mechanisms: differential privacy and MPC. What unique challenge is each privacy mechanism designed to address?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Please provide more concrete use cases of fuzzy VFL.
2. Please clarify the key concept in fuzzy VFL.
3. The design rationale behind the key components in the proposed framework is unclear. Please see my comments above and articulate them in more detail.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments; we have addressed all concerns as follows.
W1, L1. **Real Application of Multi-party Fuzzy VFL**: In Figure 15 of "rebuttal.pdf," we present a real-world application involving travel cost prediction in a city through collaboration among taxi, car, bike, and bus companies. Since personal travel information is private and cannot be shared, VFL is essential. Additionally, route identifiers - starting and ending GPS locations - can only be fuzzily linked, but linking closely related source and destination points with multi-party fuzzy VFL can greatly improve prediction accuracy.
**German Record Linkage Center (GRLC) and VFL**: Previous VFL studies [1-3] show that record linkage is a necessary preprocessing step for VFL, making it inherently involved in all real VFL applications. While VFL studies are relatively new, record linkage has been extensively studied and widely applied. Thus, our study, along with [1], considers GRLC applications to be reflective of VFL applications.
**Prevalence of Multi-party Fuzzy VFL in the Real World**: Traditional VFL assumes that each data record represents a **user**, making the assumption of a universal user ID natural. However, in real-world applications, these records often represent **various real-world objects** - such as a room (e.g., `house` and `hdb` datasets) or a travel route (e.g., `taxi` dataset). Expecting all real-world objects to have a universal, precise ID is unrealistic, which is why case studies in [1] found that over 70% of applications cannot be linked exactly.
W2, L2. We adopt privacy-preserving record linkage (PPRL) [4] for similarity calculation. PPRL is a well-studied area separate from VFL, and we do not impose constraints on specific PPRL approaches, including linkage strategies, representation of fuzzy identifiers, or privacy mechanisms. For example, FEDERAL [5], a PPRL method, transforms fuzzy identifiers into Bloom filters that offer provable privacy guarantees. These Bloom filters' similarities reflect the original fuzzy identifiers' similarities. Our approach, like other fuzzy VFL algorithms [1-3], operates on these calculated similarities. While changing the PPRL method might impact the quality of the similarity measures, it does not affect FeT's superiority compared to other VFL algorithms.
W3, L3. **Design Rationale**: Dynamic masking filters out unrelated data records with low similarities before they reach deep attention layers. FeT feeds the top-K similar records into the transformer, but this can introduce many unrelated records, leading to overfitting. To prevent this, a simple MLP creates a mask based on current identifiers, allowing the attention layers to focus on a narrower neighborhood and reducing overfitting.
**Why Dynamic Masking is Necessary**: In our ablation study (Table 3 of Appendix C.1), removing dynamic masking (FeT w/o DM) results in significant performance loss across all five datasets. For example, on `MNIST`, training without dynamic masking led to a 13 percentage point drop in accuracy, demonstrating its necessity.
**How Dynamic Masking Functions**: The dynamic masking module, a simple two-layer MLP, takes identifiers as input and outputs a mask added to the attention keys (passed to `torch.nn.MultiheadAttention` as `key_padding_mask`). The learned dynamic masks, visualized in Figure 11 of "rebuttal.pdf," reveal two key observations:
- Dynamic masking effectively focuses on a localized area around the primary party's identifiers without accessing them directly. Records with distant identifiers on secondary parties (shown in cool colors) receive small negative mask values, reducing their significance in the attention layers.
- The focus area varies in scale and direction across samples, indicating that the dynamic masking layer generates sample-specific masks to reduce overfitting.
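The mechanism can be illustrated with a toy additive attention mask (a pure-Python sketch of the general technique, not FeT's implementation; all numbers are made up): adding a large negative mask value to a record's attention logit before the softmax drives its attention weight toward zero, which is how the learned mask lets the attention layers ignore distant fuzzily linked records.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Attention logits of one query against 5 fuzzily linked records.
logits = [1.0, 0.9, 1.1, 1.0, 0.8]
# Hypothetical learned additive mask: near zero for close records, very
# negative for distant ones (here records 3 and 4 are "far" in ID space).
mask = [0.0, 0.0, 0.0, -30.0, -30.0]

weights = softmax([l + m for l, m in zip(logits, mask)])
print([round(w, 3) for w in weights])  # distant records get ~0 attention
```

Because the mask is additive in logit space, a value around -30 suppresses a record's weight to essentially zero while leaving the relative ordering of the unmasked records untouched.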
W4. Party dropout is a novel feature in VFL, distinct from traditional Dropout, introduced alongside our split-sum neural network design. **Almost all previous VFL** models [1-3,6] use a split-concat design, where representations are **concatenated** at the cut layer and passed to the aggregation model, requiring all parties to be present in each training step. In contrast, we use secure multi-party computation to **average** the representations, allowing some parties to be absent. This enables party dropout, which **disables training of entire encoders** of some parties in each step, rather than specific layers. Combined with SplitAvg, party dropout reduces communication costs in multi-party VFL in proportion to the dropout ratio.
W5. Our primary goal is to protect representations from secondary parties against an honest-but-curious primary party. We achieve this by applying differential privacy, adding noise to the representations. In VFL, unlike horizontal FL, data across parties are **related**. Thus, the primary party gains **access to more related representations as more parties join**. This typically requires adding more noise to maintain the same level of differential privacy, thus reducing utility. To address this, we use secure multi-party computation, letting the primary party know only the sum of the representations, keeping noise levels constant regardless of the number of parties.
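As an illustration of why the primary party learns only the sum, here is a toy pairwise-masking secure-sum sketch (a standard MPC construction used for exposition; the paper's exact protocol and the DP noise addition are omitted): the pairwise random masks cancel in the sum, while each individual masked share reveals nothing on its own.

```python
import random

random.seed(1)

# Toy secure-sum sketch: party i adds a random mask r[(i, j)] for each later
# party j and subtracts r[(j, i)] for each earlier party j. Every mask is
# added exactly once and subtracted exactly once, so the masks cancel in the
# sum, but each individual share looks random to the primary party.
values = [3.0, -1.5, 2.5, 0.5]  # one scalar "representation" per party
n = len(values)
r = {(i, j): random.uniform(-100, 100)
     for i in range(n) for j in range(i + 1, n)}

shares = []
for i in range(n):
    masked = values[i]
    masked += sum(r[(i, j)] for j in range(i + 1, n))
    masked -= sum(r[(j, i)] for j in range(i))
    shares.append(masked)

print(round(sum(shares), 6), round(sum(values), 6))  # sums agree
```

In the DP setting described above, each party would also add its own noise before masking, so the primary party sees one noisy sum whose noise scale does not need to grow with the number of parties.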
**References**
[1] Wu et al. "A coupled design of exploiting record similarity for practical vertical federated learning." NeurIPS 22.
[2] Nock et al. "The impact of record linkage on learning from feature partitioned data." ICML 21.
[3] Nock et al. Entity resolution and federated learning get a federated resolution. arXiv 18.
[4] Vatsalan et al. "Privacy-preserving record linkage for big data: Current approaches and research challenges." Handbook of big data technologies, 17.
[5] Karapiperis et al. "FEDERAL: A framework for distance-aware privacy-preserving record linkage." TKDE 18.
[6] Liu, et al. "Vertical federated learning: Concepts, advances, and challenges." TKDE 24.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. Most of my concerns have been addressed. I will raise my score. | Summary: This paper introduces the Federated Transformer (FeT) framework, which is designed to support multi-party VFL.
It enhances the efficiency of model training among multiple parties using fuzzy identifiers while ensuring data privacy.
Experimental results show that FeT performs well when scaled to 50 parties, and the authors also provide a theoretical proof of differential privacy.
Strengths: **Strengths of the Paper**
**Originality**
The biggest novelty of the work is the introduction of the Federated Transformer (FeT) framework. It presents a new aspect of handling vertical federated learning (VFL) with fuzzy identifiers.
- **FeT Structure**: The work introduces the architecture of FeT in detail. The model can be split into two parties: the primary party and the secondary party. It also includes several innovative techniques including Dynamic Masking, Party Dropout, and Positional Encoding Averaging.
**Experiment**
The authors compare the performance of the Federated Transformer (FeT) against multiple baseline methods which is a robust benchmarking. The experimental results demonstrate that FeT significantly outperforms baseline models, achieving improvements of up to 46 percentage points when scaled to 50 parties, which is a significant performance gain.
**Significance and Future Impact**
FeT is suitable for multimodal learning which aligns many federated learning scenarios in real life while protecting individual privacy.
Weaknesses: **Weaknesses of the Paper**
**Model Design**
1) The introduction of a trainable dynamic masking module aims to improve the exclusion of incorrectly linked data records. However, if this module is not well optimized, it could introduce additional noise or errors that degrade model performance.
2) The proposed positional encoding averaging may not be applicable in all VFL scenarios, which may cause inconsistencies in data processing and affect model training.
**Privacy Trade-offs**
The paper says that stringent privacy safeguards may cause accuracy reductions, particularly with low values of $\epsilon$. It would be better if the authors could provide a more detailed analysis of the trade-offs between privacy and performance.
**Theoretical Proof**
While the paper provides some theoretical proofs, it may not offer a complete set of proofs for all claims made. The theoretical section did not adequately discuss the limitations of the proposed methods or the implications of violating the underlying assumptions. A more thorough exploration of these aspects would enhance the robustness of the theoretical framework.
**Experiment**
The experiments are conducted on a small number of datasets, which may not represent the full range of scenarios in VFL. Furthermore, the experiments do little to explore the trade-off between privacy guarantees and model utility, especially at low values of ε in differential privacy.
Technical Quality: 3
Clarity: 3
Questions for Authors: **Questions**
1\. **Scenario Application**:
How can FeT be adapted to work with unstructured data types such as text or images? Are there any preliminary results or insights on applying FeT to these types of data?
2\. **Privacy-Performance Trade-off**:
Can you provide a more detailed analysis of the trade-off between privacy and performance, especially at different levels of $\epsilon$?
3\. **Model Performance on datasets with different features**:
How does the performance of the Federated Transformer (FeT) vary when applied to datasets with significantly different feature distributions across multiple parties?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have addressed the limitations of their work by identifying three primary limitations related to the assumptions of common features across parties, the potential accuracy reductions due to stringent privacy safeguards, and the correlation between keys and data representations.
Here are some other potential limitations that may be considered:
**Large-scale cases**: The model's performance may degrade with a large number of parties or larger datasets, which may limit its application in certain large federated learning scenarios.
**Privacy Trade-offs**: While the model is designed to enhance privacy, there may still be some vulnerabilities to be exploited, especially when the underlying assumptions about data sharing are violated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments; we have addressed all concerns as follows.
W1.1 **Design of Dynamic Masking**: Dynamic masking is a simple two-layer MLP that can be easily optimized and performs well across all five datasets. Our ablation study in Table 3 of Appendix C.1 demonstrates that dynamic masking **consistently and significantly improves performance on all five datasets**. For example, on the `MNIST` dataset, omitting dynamic masking results in a 13 percentage point drop in accuracy. To provide further insight, we **visualize the learned dynamic mask in Figure 11** of the attached "rebuttal.pdf."
This visualization reveals two key observations: First, dynamic masking effectively focuses on a localized area around the identifiers of the primary party. Data records with more distant identifiers on secondary parties (shown in cooler colors) are assigned small negative mask values, reducing their influence in the attention layers. Second, the focus area varies in scale and direction across different samples, indicating that the dynamic masking layer learns to generate sample-specific masks, which helps to reduce overfitting.
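As a toy illustration of the mechanism described above (our own sketch, not the paper's implementation), adding a strongly negative learned mask value to a record's attention logit drives its post-softmax weight toward zero:

```python
import math

def masked_attention_scores(scores, mask):
    # Learned per-record mask values are added to the attention logits
    # before softmax, so records with large negative masks receive
    # near-zero attention weight.
    logits = [s + m for s, m in zip(scores, mask)]
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Three equally-scored records; the third is masked out.
w = masked_attention_scores([1.0, 1.0, 1.0], [0.0, 0.0, -10.0])
assert w[2] < 0.01 and abs(sum(w) - 1.0) < 1e-9
```

In FeT the mask values themselves come from a small MLP over the identifiers, so the "focus area" adapts per sample.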
W1.2 **Design of Positional Encoding (PE) Averaging**: This design is specifically applicable to VFL scenarios with independent and identically distributed (i.i.d.) identifiers, similar to the FedAvg approach. VFL with heterogeneous or non-i.i.d. identifiers (e.g., one party using GPS coordinates while another uses postal codes) remains an open problem, posing significant challenges for identifier alignment - an area not yet explored in VFL research. Our ablation study in Table 5 of Appendix C.3 shows that PE averaging improves performance compared to non-PE-averaging FeT (average frequency = 0) in most datasets.
W2 **Privacy Trade-offs**: The trade-off between privacy and performance is illustrated in Figure 5 and Figure 9 of the paper. Both performance (measured by accuracy) and privacy (denoted by $\varepsilon$) are influenced by the noise scale $\sigma$. Generally speaking, $\varepsilon$ reflects the probability bound of determining whether a specific data record exists in the training set based on the representations, regardless of the attack method. Thus, a lower $\varepsilon$ indicates higher privacy. To achieve high privacy, such as $\varepsilon = 1$, a noise scale of approximately $\sigma \approx 8$ is required, which significantly reduces utility, causing FeT to perform similarly to Solo. Conversely, when $\varepsilon$ is larger, such as $\varepsilon = 8$, the required noise scale decreases to $\sigma = 1$, which allows FeT to maintain relatively good performance.
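For intuition only (a single-query textbook bound, not the paper's accounting, which composes noise over many training steps), the classic Gaussian mechanism makes the inverse relationship between $\sigma$ and $\varepsilon$ explicit:

```python
import math

def gaussian_mechanism_epsilon(sigma, delta=1e-5, sensitivity=1.0):
    # Classic (eps, delta)-DP Gaussian mechanism:
    # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps, solved for eps.
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / sigma

# A larger noise scale yields a smaller epsilon, i.e., stronger privacy.
assert gaussian_mechanism_epsilon(8.0) < gaussian_mechanism_epsilon(1.0)
```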
W3. **Theoretical Proof**: The full proof of Theorem 3 is provided in Appendix A. Background theorems and definitions of differential privacy are covered in Section 2, while the threat model is detailed in Section 4. Since Theorem 3 is based on differential privacy, its assumptions inherit the assumptions of differential privacy.
W4 **Experiments on More Scenarios of VFL**: We have added experiments using the VertiBench [1] datasets to cover a broader range of VFL scenarios, as detailed in the attached "rebuttal.pdf." The results demonstrate that: 1) FeT outperforms baselines across varying levels of imbalance between parties, and 2) FeT shows better performance in VFL scenarios where parties have more balanced features.
Q1. **Scenario Application**: Images and texts can be effectively handled by transformer structures, as demonstrated in many existing works [2]. Thus, adapting FeT to other unstructured data should not pose a significant challenge. The two main ideas - 1) encoding common features as positions and 2) dynamic masking - are particularly useful for multimodal alignment in these scenarios. The primary challenge, however, lies in the significant heterogeneity of features across parties. As shown in Figure 13 of the "rebuttal.pdf," although FeT outperforms baselines, its performance on imbalanced features requires further improvement. We are actively working on addressing this issue.
Q2. Please refer to W2. Privacy Tradeoffs.
Q3. **Significantly Heterogeneous Features**: We have conducted additional experiments using the VertiBench [1] datasets with varying heterogeneity (balance level $\alpha$) across parties, as shown in Figure 13 in the attached "rebuttal.pdf." The results demonstrate that while FeT's absolute performance decreases under severely heterogeneous feature distributions (very low $\alpha$), its improvement over Solo training becomes more pronounced in such heterogeneous scenarios.
L1. **Large Scale Cases**: Unlike horizontal federated learning, Vertical Federated Learning (VFL) is primarily observed in cross-silo scenarios [3] rather than cross-device scenarios [4], as it is unlikely that a single user would have features distributed across thousands or even millions of distinct parties. This observation is supported by our communication with industry collaborators that provide commercial VFL service. Therefore, we focus on the cross-silo case, where collaborations among dozens or hundreds of parties are more common.
L2. **Privacy Trade-offs**: We agree that malicious parties, rather than our assumed honest-but-curious parties, would present a greater privacy risk to the model design. We will acknowledge this limitation and plan to explore such scenarios in future work.
**References**
[1] Wu et al. "VertiBench: Advancing feature distribution diversity in vertical federated learning benchmarks." ICLR 24.
[2] Han et al. "A survey on vision transformer." TPAMI 22.
[3] Huang et al. "Cross-silo federated learning: Challenges and opportunities." arXiv 22.
[4] Karimireddy et al. "Breaking the centralized barrier for cross-device federated learning." NeurIPS 21. | Summary: This paper proposes the Federated Transformer (FeT) framework, which aims to address performance and privacy challenges in multi-party fuzzy vertical federated learning (VFL). FeT leverages the Transformer architecture to encode fuzzy identifiers and distribute training across different parties. The authors introduce three innovative techniques—position encoding averaging, dynamic mask module, and party dropout strategy—to enhance model performance while minimizing computational and communication overhead. Additionally, FeT incorporates a scalable privacy framework that combines differential privacy and secure multi-party computation, effectively safeguarding local data representations and ensuring manageable privacy maintenance costs. Experimental results demonstrate that FeT outperforms baseline models, offering superior performance and enhanced privacy protection. Overall, FeT overcomes the limitations of existing models in multi-party fuzzy VFL, showcasing exceptional performance and practicality.
Strengths: 1. This manuscript proposes the FeT framework, which combines federated learning with the Transformer architecture. It applies dynamic masking, a party dropout mechanism, and positional encoding averaging to improve the model's performance and privacy protection in multi-party fuzzy vertical federated learning.
2. The FeT framework introduces SplitAvg, a hybrid privacy protection mechanism, which effectively reduces the introduction of noise and improves data utility. In addition, the privacy amplification technology enhances the privacy protection effect. This ensures data privacy as well as the efficiency and robustness of the model.
Weaknesses: This manuscript proposes a scalable vertical federated learning for practical fuzzy linked data, aiming to solve the performance and privacy issues of real-world VFL in its application. However, there are still some major issues, as follows:
1. In the Introduction section, the definition of fuzzy identifiers and multi-party fuzzy VFL and the relationship between the identifiers, fuzzy identifiers and multi-party fuzzy VFL is not clearly introduced, which may lead to confusing concepts for readers. In addition, the important role of fuzzy identifiers in linking real-world data sets should be emphasized to prove the practical value of studying multi-party fuzzy VFL.
2. In the training of FeT, PPRL was first used to evaluate the identifier similarity between the primary participant P and each secondary participant. However, the author did not mention this method in the previous content. Based on this, a very important question is: in the comparative experiment, the author did not seem to compare the performance gap, privacy gap, and computing resource consumption gap between the training using only the PPRL method and the training based on the transformer structure in this article?
3. In FeT, the combination of dynamic masking, the party dropout mechanism, and the hybrid privacy protection mechanism will inevitably affect the training time, computing resource consumption, and privacy protection ability. Therefore, it is also important to compare the training efficiency, computing resource consumption, and privacy protection ability of FeT against current mainstream methods. This indicates whether FeT can surpass other methods in real-world applications. The experiments may be set up as follows: (a) training time of FeT and mainstream methods on the same dataset under the same conditions; (b) CPU/GPU usage of FeT and mainstream methods during training, to measure memory consumption and network communication overhead, especially with many participating parties; (c) the privacy protection effect of FeT and mainstream methods, evaluated through quantified privacy indicators (such as the ε value of differential privacy).
Minor weaknesses:
1. In lines 27 and 28, the authors describe collaboration between hospitals, financial institutions, and sensors. This juxtaposes the three as equivalent entities, which is unreasonable.
2. In lines 58 and 59, the authors mention that experiments show model accuracy improved by up to 13 percentage points in 50-party fuzzy VFL on the MNIST dataset. It is not stated which method this is compared against; please add it.
3. In line 139, the author defines the common features shared by all parties as an identifier, expressed as \(x^i = [k^i, d^i]\), where \([\cdot]\) signifies concatenation. However, the notations \([\cdot,\cdot]\) and \([\cdot]\) are not the same; please make them consistent.
Technical Quality: 3
Clarity: 3
Questions for Authors: same as the weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments; we have addressed all concerns as follows.
W1. We will include a formal definition in Section 4 and the introduction. Our scenario extends the two-party fuzzy VFL model defined in FedSim [1] to a multi-party setting. Below, we clarify the key terms:
- **Multi-party VFL**: Federated learning tasks involving $k+1$ parties, denoted as $\{S_h\}_{h=0}^k$. Each party $S_h$ has $n_h$ data records, represented as $\mathbf{x}^{S_h} \in \mathcal{R}^{n_h \times (c + m_h)}$, where $c$ is the number of common features shared by all parties, and $m_h$ refers to the unique features held by each party $S_h$.
- **Identifier**: For each data instance $x_i^{S_h} \in \mathbf{x}^{S_h}$, the $c$ common features $k_i^{S_h} \in \mathcal{R}^{1 \times c}$ are termed the **identifier** or **key** of $x_i^{S_h}$. All identifiers for $S_h$ are denoted as $\mathbf{k}^{S_h} := \{k_i^{S_h}\}_{i=1}^{n_h}$.
- **Fuzzy Identifier**: In multi-party VFL, if two parties $S_a$ and $S_b$ have identifiers such that $\forall k_i \in \mathbf{k}^{S_a}$ and $\forall k_j \in \mathbf{k}^{S_b}$, $k_i \neq k_j$, then $\mathbf{k}^{S_a}$ and $\mathbf{k}^{S_b}$ are **fuzzy identifiers**.
- **Multi-party Fuzzy VFL**: A VFL scenario involving parties with fuzzy identifiers.
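A toy check of the fuzzy-identifier condition above (illustrative only; real identifiers are $c$-dimensional vectors and linkage uses similarity rather than equality):

```python
def are_fuzzy(keys_a, keys_b):
    # Per the definition above: two parties' identifier sets are "fuzzy"
    # when no key of party a exactly equals any key of party b.
    return not any(ka == kb for ka in keys_a for kb in keys_b)

# GPS-like scalar identifiers: close but never identical -> fuzzy.
assert are_fuzzy([1.301, 2.904], [1.300, 2.905])
# One exact match -> the pair can be exactly linked, so not fuzzy.
assert not are_fuzzy([1.301, 2.904], [2.904, 7.0])
```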
**Importance of Fuzzy Identifiers**: The critical role of linkage quality in VFL performance has been shown both empirically [1] and theoretically [2]. This impact is even more pronounced in multi-party settings with increasing noise, as demonstrated by our significant improvement over FedSim in such scenarios.
**Prevalence of Multi-party Fuzzy VFL in the Real World**: Traditional VFL assumes that each data record represents a **user**, making the assumption of a universal user ID natural. However, in real-world applications, these records often represent **various real-world objects** - such as a room (e.g., `house` and `hdb` datasets) or a travel route (e.g., `taxi` dataset). Expecting all real-world objects to have a universal, precise ID is unrealistic, which is why case studies in [1] found that over 70% of applications cannot be linked exactly.
W2. Privacy-Preserving Record Linkage (PPRL) is a linkage algorithm that calculates identifier similarities but lacks learning mechanisms, serving as a preprocessing step for most VFL algorithms. As such, PPRL is not directly comparable to VFL approaches like FeT.
An extension of PPRL, known as Top1Sim [3,4], performs VFL on the most similar records identified by PPRL. We have thoroughly compared Top1Sim as a baseline. However, this simple extension overlooks significant information, which led to the development of learning-integrated approaches like FedSim [1] and FeT. Notably, Top1Sim incurs a 42% higher RMSE compared to FeT on the `house` dataset.
W3. **Efficiency of FeT**: We compared FeT's computational and memory efficiency with FedSim, the state-of-the-art fuzzy VFL model, as shown in Table 11 of "rebuttal.pdf," and identified three key findings:
1. **Parameter Efficiency**: FeT has a comparable or even smaller (23%-129%) number of parameters than FedSim, indicating that its performance improvement is due to model design rather than parameter scaling.
2. **Memory Efficiency**: FeT is significantly more memory efficient, consuming only 20-39% of the memory compared to FedSim. However, this efficiency comes at the cost of training speed. FeT performs neighbor search in parallel during real-time training, whereas FedSim spends hours linking top-K neighbors and preloads all linked neighbors into GPU memory, leading to longer linkage times, repeated data records, and higher memory usage.
3. **Overhead of New Components**: As detailed in Table 11 of "rebuttal.pdf," the additional components in FeT - dynamic masking and positional encoding - add minimal extra parameters (1k - 0.4M), causing only a slight computational overhead (0-5 seconds per epoch slowdown).
In summary, FeT delivers **better performance and improved memory efficiency with a similar number of parameters** compared to FedSim, despite slightly lower training speed. Further optimization techniques, such as pipeline parallelism, could enhance FeT’s training speed but are beyond this study's scope.
**Privacy of FeT**: Figures 5(b, d) and 10 in the paper show FeT's privacy superiority. Since related studies use similar Gaussian noise mechanisms, we compare the noise scale at the same $\varepsilon$ rather than accuracy. At the same $\varepsilon$, FeT requires much less noise than RDP-based pure DP approaches such as FedOnce [5], generally leading to better model utility.
MW1. **Real Application of Multi-party Fuzzy VFL**: In Figure 15 of "rebuttal.pdf," we present a real-world application involving travel cost prediction in a city through collaboration among taxi, car, bike, and bus companies. Since personal travel information is private and cannot be shared, VFL is essential. Additionally, route identifiers - starting and ending GPS locations - can only be fuzzily linked, but linking closely related source and destination points with multi-party fuzzy VFL can significantly improve prediction accuracy.
MW2. The 13% improvement comes from comparing FeT to FeT without dynamic masking, as shown in Table 2 of Appendix C.1.
MW3. We will correct this formulation in the revision.
**References**
[1] Wu et al. "A coupled design of exploiting record similarity for practical vertical federated learning." NeurIPS 22.
[2] Nock et al. "The impact of record linkage on learning from feature partitioned data." ICML 21.
[3] Hardy et al. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv, 17.
[4] Nock et al. Entity resolution and federated learning get a federated resolution. arXiv, 18.
[5] Wu et al. "Practical vertical federated learning with unsupervised representation learning." IEEE Transactions on Big Data, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses to address my concerns. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their thoughtful efforts in reviewing our manuscript and providing valuable feedback. In response, we have made substantial revisions and added new experiments, visualizations, and real-world applications to the "rebuttal.pdf," including Figures 11-15 and Table 11. Below is a summary of the contents and key findings for each figure and table:
- **Figure 11**: Visualization of learned dynamic masks for different samples. Each figure displays one sample (red star) from the primary party, fuzzily linked with 4,900 samples (circles) from 49 secondary parties. The position indicates the sample's identifier, and colors reflect the learned dynamic mask values, with larger mask values indicating higher importance in the attention layers.
- **Findings**:
1. Dynamic masking effectively focuses on a localized area around the primary party's identifiers. Data records with distant identifiers on secondary parties (shown in cooler colors) receive small negative mask values, reducing their significance in the attention layers - without accessing the primary party's original identifiers.
2. The focus area varies in scale and direction across samples: for example, the left figure concentrates on a small bottom area, the middle figure on a small top area, and the right figure on a broad area in all directions.
- **Figure 12**: Analysis of the effect of identifier fuzziness, showing the scale of Gaussian noise added to precisely matched identifiers.
- **Findings**: FeT consistently outperforms baselines at moderate fuzzy scales. When identifiers are very noisy, FeT's performance approaches that of Solo. Conversely, when identifiers are highly accurate, FeT's performance approaches that of Top1Sim.
- **Figure 13**: Performance on VertiBench MNIST datasets with varying levels of imbalance ($\alpha \in [0.1, 50]$). A larger $\alpha$ indicates a more balanced feature distribution.
- **Findings**: Both FeT and baseline models show improved performance in more balanced scenarios. FeT consistently demonstrates better accuracy than baselines across varying levels of heterogeneity.
- **Table 11**: Comparison of training efficiency on an RTX3090 GPU (batch size 128). PE: positional encoding; DM: dynamic masking.
- **Findings**:
1. **Parameter Efficiency**: FeT has a comparable or even smaller number of parameters than FedSim, suggesting that its performance improvement is due to model design rather than an increase in parameters.
2. **Memory Efficiency**: FeT is significantly more memory efficient than FedSim, consuming only 20-39% of the memory. However, this efficiency comes at the cost of training speed, as FeT performs neighbor search in parallel during real-time training. In contrast, FedSim requires hours to link top-K neighbors and preloads all linked neighbors into GPU memory, resulting in longer linkage times and higher memory usage.
3. **Overhead of New Components**: The additional components in FeT - dynamic masking and positional encoding - add very few extra parameters (1k - 0.4M), resulting in negligible additional computational cost (0-5 seconds per epoch slowdown).
- **Figure 14**: Impact of varying the number of neighbors ($K$) on FeT performance.
- **Findings**: FeT consistently outperforms all baselines, even when $K > 50$, which involves many unrelated data records in the linkage process. When $K$ is small, the performance of FeT may decrease due to the lack of information, which is also the reason for the low performance of Top1Sim. This demonstrates the ability of FeT to filter out unrelated records in large-$K$ scenarios.
- **Figure 15**: Real-world application of fuzzy multi-party VFL for travel cost prediction in a city.
- **Application**: This scenario involves predicting travel costs through collaboration among companies managing taxis, cars, bikes, and buses. Since personal travel information is private and cannot be shared across companies, the identifiers for each route - starting and ending GPS locations - can only be linked fuzzily. Nevertheless, linking routes with closely related source and destination points significantly improves prediction accuracy.
In the detailed response to each reviewer, we use abbreviations to refer to each comment due to the character constraint. Specifically,
- Wi: Weakness i
- MWi: Minor Weakness i
- Qi: Question i
- Li: Limitation i
Some concerns in the review are closely related (e.g., Weakness 1 and Limitation 6); thus, we have combined them (e.g., W1, L6) in our response.
In our response to each reviewer, we believe we have addressed all concerns and would be grateful if you could consider adjusting your rating if you find our revisions satisfactory.
Pdf: /pdf/2185e8e441d3bc5f8b4c862eafcde2582527c819.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Federated Black-Box Adaptation for Semantic Segmentation | Accept (poster) | Summary: The authors proposed a black-box tuning method to address the problem of Semantic Segmentation in FL. Specifically, they propose to split the network into two parts and store them in server and clients separately. The server modules are optimized via first-order optimization while the client modules are optimized via black-box optimization. The proposed method achieves promising results on different benchmarks.
Strengths: 1. The writing is clear and the presentation of the method is easy to follow.
2. The proposed method of combining split learning and black-box optimization is interesting.
3. The experiments are thorough and the results of the method are promising.
Weaknesses: 1. From my understanding, the privacy problems still exist. Since the authors claim to use a two-layer CNN at the client, which is relatively lightweight and compact compared to the network used at the server, attackers at the server may still be able to reproduce the training data from the intermediate output (the values uploaded from each client). The authors should add further discussion of this issue.
2. In process 1b (Figure 2), how is the first-order gradient computed at the server? Are the ground-truth semantic maps uploaded to the central server, or are the network predictions downloaded to the clients and then evaluated? The first solution poses privacy threats; for the second, the communication costs will be very large. The authors should clarify this in more detail.
3. The authors introduce a new two-layer CNN at each client. Could the DeepLabv3 network (the network at the central server) instead be split directly between the clients and the server?
4. It would be interesting to compare with other FL algorithms which also applies Black-box Tuning such as [1].
[1] Sun J, Xu Z, Yin H, et al. Fedbpt: Efficient federated black-box prompt tuning for large language models[J]. arXiv preprint arXiv:2310.01467, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the questions in the Weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors did not address the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments.
1) Reproduction of training data:
a) The clients can arbitrarily choose their networks. In the current setting, there is no dependence between the clients or the client and the server other than the output size of the client's last layer. The current work proposed a two-layer CNN as an example network.
b) Even for a two-layer network, let's say an attacker knows the intermediate representations. The task for the attacker would then be to find the input x and the function f given y, where f is a small network and y is the leaked intermediate output. To recover x, one could solve the equations y = f(x) and y' = f'(x), where f' denotes the derivative with respect to x. However, this method requires the gradients to be known as well, which is not the case with our method.
c) Furthermore, one method could be to learn a new network to produce the same outputs as the client models, given some input. In this case, the attacker would initialize a new two layer CNN and would have to query the original client model with learnable inputs. This way, they could train the new model with the client outputs as ground truths. However, for this case as well, the attacker would require access to the actual client models.
2) Sending Masks to the Server: In the proposed work, masks are sent to the server for FOO. Even with this information, the attacker would require raw data to generate a new model that can emulate the client without gradients. For example, consider stable diffusion or similar methods that can recreate images given their masks. These would be based on public datasets and hence would not be able to generate the PII information present in the raw data. Given the mask, an attacker can still generate synthetic data, but that cannot be considered as replicating raw data, since it would still use a distribution of pixels similar to public datasets and not the private client data. We believe that not requiring model information transfer and gradient transfer is an important step in the direction of better privacy-preserving FL over existing methods.
3) Directly Splitting the Server Model: The network at server could also be split into two parts as mentioned by the reviewer. While this is a valid strategy, this would limit all the clients to use the same network. At the same time, the server would potentially have the complete model architecture information, which is not required in the current setting. The motivation behind proposing a new lightweight client model was that this would allow different clients to have their own design choices for the model with the condition that the output should have a particular dimension. Since the client is updated using ZOO, having a lightweight model might also help in getting a better performance.
4) FedBPT as an additional black-box method: We thank the reviewer for pointing out the FedBPT method. We will add it to the related work section. Please note that FedBPT was proposed to work with foundation models as a prompting mechanism: it uses CMA-ES, a ZOO method, to learn a prompt for the pretrained foundation model at each client and aggregates these prompts at the server. Since we want to learn the network parameters themselves and not prompts, we cannot directly compare with this method. Instead, for the experiments, we tried replacing SPSA-GC with CMA-ES for updating the client in our algorithm. However, CMA-ES crashes the program because of the large number of parameters in the network compared to prompts: it is an evolutionary algorithm that maintains a large population of candidate values for each trainable parameter, perturbed toward a better loss value in each iteration, which is infeasible at this parameter count.
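To make the scalability contrast concrete: plain SPSA (the basis of SPSA-GC, which adds a Nesterov-style gradient correction) needs only two loss queries per step regardless of the number of parameters, whereas CMA-ES must maintain a population over the full parameter space. A minimal stdlib sketch of the SPSA estimator on a toy quadratic (our illustration, not the paper's code):

```python
import random

def spsa_grad(loss, theta, c=0.01, rng=random):
    # Perturb ALL coordinates at once with a Rademacher (+/-1) vector;
    # the estimate costs two loss queries regardless of dimensionality.
    delta = [rng.choice((-1.0, 1.0)) for _ in theta]
    l_plus = loss([t + c * d for t, d in zip(theta, delta)])
    l_minus = loss([t - c * d for t, d in zip(theta, delta)])
    # delta entries are +/-1, so 1/delta_i == delta_i.
    return [(l_plus - l_minus) / (2.0 * c) * d for d in delta]

# Toy check: minimize a quadratic stand-in for the client objective.
loss = lambda t: sum(x * x for x in t)
theta, rng = [1.0] * 5, random.Random(0)
for _ in range(300):
    theta = [t - 0.05 * g for t, g in zip(theta, spsa_grad(loss, theta, rng=rng))]
assert loss(theta) < 1e-2
```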
---
Rebuttal 2:
Comment: Dear reviewer,
Thank you for the comments on our paper.
We have submitted the response to your comments and a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score after we address all the issues.
Thank you
---
Rebuttal Comment 2.1:
Comment: During the rebuttal, the authors have addressed most of my concerns. Therefore, I would increase my score to Broadline Accept. | Summary: This manuscript introduces a federated learning framework for semantic segmentation that neither requires knowledge of the model architecture nor involves transferring gradients, thereby preventing privacy leakage. BlackFed incorporates split neural networks and first/zero order optimization for training the server and clients, respectively.
Strengths: (1) The writing of the paper is clear.
(2) The concept presented in this manuscript is interesting.
Weaknesses: (1) In addressing privacy leakage caused by sharing weights and gradients in current federated learning methods, this manuscript proposes BlackFed, which iteratively updates client and server networks. However, this approach involves transmitting extracted features and labels between the client and server, leading to privacy leakage and security risks. For instance, some algorithms like stable diffusion [1] can reconstruct client images from these features and segmentation masks. Therefore, I believe BlackFed does not effectively protect privacy and fails to show superiority over methods based on network weights and gradients.
(2) There are too few comparison methods. In the Related Work section, many federated learning methods for segmentation tasks are introduced; FedSeg and FedSM could be used for comparison.
(3) BlackFed preprocesses client data using a two-layer convolutional neural network on clients, allowing different clients to extract global features. These preprocessed features are then sent to the server for segmentation. The client-side training follows the method proposed in [2], which limits the approach's innovation.
(4) In the inference phase, the features extracted by the client network need to be uploaded to the server each time. Does this significantly increase testing time consumption?
(5) FedPer [3] is another federated learning method using split neural networks. It is recommended to introduce it in the Related Work section and add comparison results in the experimental section.
[1] High-Resolution Image Synthesis with Latent Diffusion Models
[2] Blackvip: Black-box visual prompting for robust transfer learning.
[3] Federated Learning with Personalization Layers
Technical Quality: 2
Clarity: 3
Questions for Authors: (1) More comparison of Federated learning methods for segmentation should be performed;
(2) All experiments were conducted using DeepLabV3 as the server segmentation network. It is recommended to include experimental results on different segmentation architectures.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The motivation behind BlackFed is privacy protection. However, the feature maps extracted by clients and segmentation masks are also transmitted to the server, making them vulnerable to attacks that could reconstruct the images from different clients. Therefore, this method does not achieve the goal of privacy protection.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments.
1) Reproduction of Training data: As the reviewer suggested, the intermediate representations and segmentation masks can be used along with diffusion models. However, to train these models, one would still require the raw data to act as ground truth labels during training. If one were to use public datasets for training such diffusion models, the diffusion model would still learn to generate images from a similar distribution to the public datasets. This is not the same as regenerating raw data since raw data can have personally identifiable information (PII) that won't be learnt by diffusion models which are trained on public datasets. In such scenarios, in fact, the output from Stable Diffusion can be considered as a good source for synthetic data, which looks similar to raw data but would arise from the public data distribution. In fact, some recent works also highlight the property of stable diffusion and other image generation methods to copy the data they were trained on [1-4]. In such a scenario, in the absence of raw data from the client, the best a diffusion model would be able to do is to generate images from public datasets similar to the segmentation mask, which we think cannot be considered as reproduction of client data. We believe that not requiring model information transfer and gradient transfer is an important step in the direction of better privacy-preserving FL over existing methods.
[1] Frame by Familiar Frame: Understanding Replication in Video Diffusion Models
[2] Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models
[3] Analyzing bias in diffusion-based face generation models
[4] Replication in Visual Diffusion Models: A Survey and Outlook
2) Adding more Comparisons: FedSeg is a method that is built upon FedAvg and works by sharing model weights across server and client. Hence, this would act as an upper bound for our method, just like FedAvg. We verify this for CAMVID and Cityscapes and add it to Table 1 presented in the extra table page for rebuttal. We see that for both the datasets, it generally performs better than FedAvg. Please note that this cannot be directly compared with BlackFed since our approach operates in a more restricted setting where model sharing is not allowed. We also add other methods like FedPer in this table, as suggested.
3) Novelty for ZOO: Please note that we do not claim novelty for the ZOO method used. In fact, we cited the BlackVIP paper as being the work that introduced SPSA-GC as a ZOO method (ref. No. 38 in the paper). The major contributions of our work include formulation of the FL problem in the blackbox setting using split networks to allow gradient-free updates. The proposed approach formulates the FL problem using a lightweight client and a parameter-heavy server, along with a round-robin algorithm that can allow SPSA-GC to work well, since it was originally meant to fine-tune pre-initialized foundation models. The proposed approach shows that by alternating between clients, and updating them with ZOO and FOO, one can achieve good performance. In addition, we identify the catastrophic forgetting problem and come up with a solution using hashmaps.
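For readers unfamiliar with zeroth-order optimization, the client-side update described above can be illustrated with a minimal SPSA-style sketch in plain Python. This is not the SPSA-GC variant used in the paper (which adds gradient correction), and the function names and toy loss below are our own illustrative choices; the point is only that a gradient estimate comes from two loss evaluations, with no backpropagation through the client model:

```python
import random

def spsa_gradient(f, w, c=0.01):
    # Simultaneous perturbation: one random +/-1 direction per coordinate,
    # and only two evaluations of the loss f, regardless of dimension.
    delta = [random.choice([-1.0, 1.0]) for _ in w]
    w_plus = [wi + c * di for wi, di in zip(w, delta)]
    w_minus = [wi - c * di for wi, di in zip(w, delta)]
    diff = (f(w_plus) - f(w_minus)) / (2.0 * c)
    return [diff / di for di in delta]  # per-coordinate gradient estimate

def spsa_step(f, w, lr=0.05, c=0.01):
    # One gradient-free descent step on the client-side loss.
    g = spsa_gradient(f, w, c)
    return [wi - lr * gi for wi, gi in zip(w, g)]
```

On a toy quadratic loss, a few hundred such steps drive the loss toward zero; in a split setup like the one described, `f` would correspond to the loss value reported back by the server after it finishes the forward pass on the client's transmitted features.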
4) Inference: During inference, uploading the features to server would increase time as pointed out. Some ways to reduce this would be to upload batches of features at the same time that can reduce the amortized time cost. An alternative is to get a copy of the server initialized with the hashmap weights for the client after training is complete. Then, client can perform the inference locally. This assumes that the client has sufficient compute power. Note that the entire hashmap need not be shared, but only the entry corresponding to the client.
5) Additional Method FedPer: We thank the reviewer for pointing out this method. We will add it to the related work and comparisons in the revised paper. In addition, we added results with FedPer in Table 1 in the extra rebuttal page. However, FedPer does not use split networks. Instead, it is similar to FedAvg, the difference being that not all weights are shared with the server. Here, the majority of the weights are shared with the server and aggregated. The remaining weights are "personal" to the model and allow for better local performance. These are not shared with the server. In contrast, there are no weights shared between the client and server in BlackFed. The output from the client is given to the server, where it is further processed, making this a split network. Furthermore, the FedPer paper only performs the task of classification. In order to adapt this for segmentation, we considered the weights of the classifier head of DeepLab v3 as personal weights and the rest of the backbone weights were averaged in the server. We find that in most cases, this improved local performance over FedAvg, but reduced OOD performance.
6) Ablations with more architectures: Table 3 in the paper compares performance with three different architectures for the server. This includes Unext and Segformer in addition to DeepLab. Segformer is a transformer-based method while the others are CNN-based.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's rebuttal. The majority of my concerns have been addressed. However, the explanation of BlackFed's privacy protection is not convincing to me, so I will keep my rating unchanged.
---
Reply to Comment 1.1.1:
Comment: We are happy that we could address most of your concerns. For the privacy protection concern, please note that we do not claim a perfectly attack-proof method in this work. In lines 42-52 of the main paper, we reference works that propose attacks using the existing FL frameworks. Hence, we propose a mechanism which would not satisfy the necessary conditions of gradient transfer / model transfer for these attacks. In the contributions section from lines 53-60, we clearly state that we propose a new framework for FL without gradient and model transfer, and do not claim that it would solve the privacy preserving problem completely.
However, we will be adding a discussion on this in the future work section that can encourage more research on attacks and defenses given the new framework. We believe that our work is an important step in the direction of FL algorithms which involve minimal transfer and it will encourage future research in this direction, diffusion-based attacks and defense being one such example.
We hope that you would consider raising the score if we address the present concern.
Thanks
---
Rebuttal 2:
Comment: Dear reviewer,
Thank you for the comments on our paper.
We have submitted the response to your comments and a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score after we address all the issues.
Thank you | Summary: In this work, the authors introduce BlackFed, an FL algorithm that enables distributed learning without transfer of gradients or model weights. This characteristic distinguishes the approach as a black-box model compared to existing FL methods, which can be considered white-box approaches. Recent research on attacking FL methods requires knowledge of either the gradients or the model information, thus rendering BlackFed more resistant to such attacks since gradients and weights are not shared. BlackFed consists of a global server and multiple clients, each possessing their own data. The server uses first-order optimization while the client weights are updated using zero-order optimization in a round-robin fashion. This introduces catastrophic forgetting in the network, for which the authors propose a simple hashmap-based approach. BlackFed performs comparably to white-box methods, despite being a black-box method itself. Extensive experimentation on natural and medical domain datasets highlights the effectiveness of BlackFed.
Strengths: In this work, a new approach, named BlackFed, is proposed. For segmentation, it uses FL that does not involve gradient transfer between the server and the client and at the same time, it passes no knowledge about the client model architecture to the server, thereby avoiding the necessary conditions for these attacks.
The strengths are as follows:
1. BlackFed - a black-box algorithm that facilitates distributed learning for semantic segmentation without transferring model information or gradients between the client and the server is proposed. The authors formulate the FL problem using split-nn and use first and zero order optimization for training the server and the clients, respectively.
2. To reduce the effect of catastrophic forgetting, the authors propose a simple additional step during training. After updating the server weights for a given client during training, the updated weights of the server model are stored in a hashmap indexed by the index of the client. During inference for a given client, the latest weights of the client model and the indexed weights of the server model to perform the forward pass are used.
3. The proposed approach is evaluated on four segmentation datasets and is shown to be effective as a distributed learning method, with improvements over individual training.
Weaknesses: The weaknesses are as follows:
1. For Table 1, the proposed method indeed works well, but there is no experimental result for DeepLabv3 on a single machine. DeepLabv3 achieves 80.0 on the Cityscapes val set (https://paperswithcode.com/lib/detectron2/deeplabv3-1). The paper generates 18 clients for Cityscapes. What is the relationship between the Table 1 evaluation results and the benchmark results on Cityscapes? This may make the paper not convincing enough.
2. It is unclear why "As the model complexity increases from UNext to DeepLab to Segformer, we observe a decrease in individual training performance."
3. The image size for Cityscapes is 256 × 512. Why use such a small resolution? More experiments at higher resolutions are needed.
4. What are the details of "Consequently, we start the DeepLabv3 network in the server from the second layer, which expects a 64-channel input."? Does it mean the DeepLabv3 trained on the server is only unfrozen from the second layer? And what about the client?
Technical Quality: 3
Clarity: 3
Questions for Authors: No. Please see Weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No. Please see Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments.
1) Comparison with the baseline from PapersWithCode: The Cityscapes dataset has data from 18 different centers. In the original splits of Cityscapes, the training data contains images from some of these centers, while the data from the rest of the centers is in the validation and testing sets. In our case, we want each center to have its own training, validation, and testing data. The goal of the Federated Learning model would be to do well on test data from other centers in addition to doing well on the in-house test set, indicated by the “OOD” and “Local” columns in the tables. Hence, the data splits are done differently. Training DeepLab v3 on a single machine is the same as “Combined training” in Table 1, where all the training data is merged to train a single model, which achieves a DSC of 0.77, close to the number shown on PapersWithCode. This represents an upper bound to what BlackFed can achieve since training on a single machine means that it has access to data from all centers.
2) Clarification on Trend of Model Complexity vs Performance: The model complexity includes the number of trainable parameters, which increases from UNext to DeepLab to Segformer. But in Table 3, row 1, we see that the individual performance increased for DeepLab but decreased drastically for Segformer, even though it has the most parameters. Since the data is limited in this case, we see that increasing the number of trainable parameters beyond a certain point causes the validation performance to suffer. However, for the case of combined training, the data quantity is higher, so validation performance increases from UNext to DeepLab v3 to Segformer as expected. This is the desired trend in the FL algorithm as well since it also has access to more data than the limited-data case of individual training. This trend is seen in BlackFed as the performance increases from UNext to DeepLab and does not drop when the model complexity increases from DeepLab to Segformer.
3) Reason for Lower Resolution: We wanted to do the training of a given client on a single GPU, since institutes like medical centers do not have a high compute power. Hence, we downscale the image without affecting its aspect ratio. Training on a larger resolution would require much higher computation. Since we are emulating all client and server computations in the same compute center, this would require increased compute, especially in the case of 18 centers like Cityscapes. However, we plan to release a model zoo on Github with pretrained checkpoints at different resolutions since training with larger resolutions can improve performance.
4) Clarification on Server Architecture: All the parameters in the approach are trainable and there are no frozen layers. In our experimental setup, we use a two-layer Convolution Net in the client to reduce the load on each client. The output of this network is a feature map of shape H × W × 64. This is transmitted to the server for further processing. Hence, the first layer of the server architecture should expect an input with this shape. We use the DeepLab v3 architecture for the server. However, instead of starting from the Conv1 layer in DeepLab, we start from Conv2 (and delete Conv1 from the network), which also expects an input with 64 channels. Hence, in order to continue the training in the server from the client's output, we designed the server architecture in this way. In other words, this can be considered as one network, the first two layers of which are in the client while the rest of it is in the server, hence the term “split network”.
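The split described here can be sketched abstractly. In the toy Python below (the stand-in "layers" are our own invention, not the actual ConvNet/DeepLabv3 pair), the full model is a list of layers; the first `cut` layers stay on the client, and the server continues from the client's output exactly where the full forward pass would:

```python
def split_network(layers, cut=2):
    # Split a sequential model: the first `cut` layers stay on the client,
    # the rest run on the server. Together they behave like the full model.
    client, server = layers[:cut], layers[cut:]
    return client, server

def forward(layers, x):
    # Apply the layers in sequence; for the client part, the final value
    # is the intermediate feature map transmitted to the server.
    for layer in layers:
        x = layer(x)
    return x
```

With toy arithmetic "layers", composing the client forward pass with the server forward pass reproduces the full model's output, which is the invariant the split-network design relies on.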
---
Rebuttal 2:
Comment: Dear reviewer,
Thank you for the comments on our paper.
We have submitted the response to your comments and a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score after we address all the issues.
Thank you
---
Rebuttal Comment 2.1:
Comment: Since we are approaching the end of the discussion period, this is a gentle reminder. Please let us know if you have additional questions so that we can address them in the remaining day of the discussion period.
We hope you would consider raising the score if we have addressed your concerns satisfactorily.
Thanks
---
Reply to Comment 2.1.1:
Title: Reminder for the discussion
Comment: Dear reviewer,
Today is the last day for the discussion period. Please let us know of we were able to address all the concerns regarding the paper. If so, kindly consider raising the score. | Summary: A common issue now with federated learning systems is that they are not completely private owing to gradients transferred between the clients and the global server during training or by knowing the model architecture at the client end.
The paper proposes a workaround by removing the need to pass gradients from client to server and the need to know the model at the client end. For the gradients, the authors propose zero-order optimization (ZOO) to update the client model weights and first-order optimization (FOO) to update the server weights.
While achieving the above, they also perform reasonably well on most datasets.
Strengths: 1. An important direction of research where the federated framework does not require gradient transfer from client to server and an approach to tackle catastrophic forgetting in that framework.
2. Comparable results to gradient accessed methods across all datasets.
3. Significant reduction in training costs as compared to previous methods.
Weaknesses: 1. The case of PolypGen (given the dataset size, medical background, and criteria of data collection) is where such a setup is most needed, and it underperforms there. "This behavior may be related to the data distribution of Polypgen and suggests that BlackFedv2 is not able to correctly avoid the catastrophic forgetting for centers C5 and C6." --- I think this has less to do with catastrophic forgetting and more to do with the model simply not being able to learn. The data points are very few across all the centers. Given how the latest weights are stored, the weights used are simply poor, as they cannot be learned appropriately in the difficult case of low-data centers. It may also be that the model is overparameterised relative to the other centres. If the issue were an unsolved catastrophic forgetting problem, it would have been reflected everywhere, especially in C1 and C2 of CAMVID.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Why was the white-box training so subpar in ISIC? Further, while I am not well-versed in the literature, I imagine there are ways to tackle catastrophic forgetting in the white-box method as well. Would it be possible to do an ablation with that? Comparing V1 and white box, apart from ISIC, it beats V1 in all cases.
2. Given that CAMVID has a significantly smaller number of classes and centers, why is the performance better/on par (and more stable) on the Cityscapes dataset for BlackFed but worse in the case of "individual"? Even considering the majority classes.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: 1. "After updating the server weights for a given client during training, we store the updated weights of the server model in a hashmap". - This step may increase significantly with scale, especially with multiple client updates. Lower floating point operations are indeed handy, but this is coming at the cost of high memory requirements.
2. Issues with performance in the most important setting.
Suggestion -
3. I think in Tables 1 and 2, bolding the highest value and underlining the highest value among gradient-free methods would be a clearer presentation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments.
1) Low Performance on Polypgen: This case indeed serves as an important application area. Here, for 4 out of 6 centers (C1, C2, C4 and C5), it is beneficial to use the FL method. But for C3 and C6, the OOD performance decreases. This could be due to other centers dominating the training, thus causing catastrophic forgetting for particular centers like C6. But as pointed out by the reviewer, this should have been a problem elsewhere as well. Alternatively, the learnt weights may not be good due to a more adverse distribution shift between centers in Polypgen as compared to CAMVID. If so, it would be important to control the training of the FL algorithm to make it more robust, which is an interesting and important direction for future research in this area.
2) Counter-Intuitive Performance of Whitebox on ISIC: As pointed out by the reviewer, for ISIC, the whitebox method strangely performs subpar. We verified this through various runs and changes to the hyperparameters. This anomalous behaviour may be due to the model getting stuck in some local minima during training because of overparametrization.
3) Comparing Individual Performance for CAMVID and Cityscapes: In the BlackFed case, for both datasets, the server model benefits from the entire dataset, hence the similar performance. However, in the individual case, the model performs subpar on some of the clients. For CAMVID, the individual performance of C1 is much lower than the others. For Cityscapes, some clients perform well and some perform subpar; since the number of clients is larger, the average individual performance is lower. This also depends on the choice of model architecture and the data distribution, since the number of data points is so small.
4) Additional Memory Overhead: As the reviewer pointed out, the proposed approach comes with an added memory overhead for storing the hashmap, which scales with the number of clients. However, the motivation for the method was based on the assumption that while the client has limited memory, the common server can have a large memory. Based on the model architecture, each entry in the hashmap would be of the order of 100 MB or less. If there are 100 clients, this would translate to 10 GB, which is a realistic number. For larger models, we believe that the memory overhead cost would not be a bottleneck in comparison to the cost of training such models. One interesting direction in such cases would be using Parameter Efficient Finetuning Methods (PEFT) like adapters or LoRA and only storing them.
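The hashmap bookkeeping discussed here can be sketched minimally in Python, with a scalar standing in for the full server model state and all names our own (illustrative only): after each client's turn in the round-robin loop, the current server weights are snapshotted under that client's index.

```python
def round_robin_train(clients, server_update, num_rounds=2):
    # Round-robin training loop that snapshots the server weights after
    # each client's turn, so that inference for client i can later use
    # the snapshot stored under key i rather than the latest weights.
    server_weights = {"w": 0.0}  # toy stand-in for the server model state
    hashmap = {}
    for _ in range(num_rounds):
        for cid, data in clients.items():
            server_weights = server_update(server_weights, data)
            hashmap[cid] = dict(server_weights)  # store a copy, keyed by client
    return hashmap
```

After training, inference for client `i` would load `hashmap[i]` instead of the final server weights, which is the mechanism described above for mitigating catastrophic forgetting; the memory cost grows linearly with the number of clients, as the rebuttal notes.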
---
Rebuttal 2:
Comment: Dear reviewer,
Thank you for the comments on our paper.
We have submitted the response to your comments and a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score after we address all the issues.
Thank you
---
Rebuttal Comment 2.1:
Comment: I would choose not to change my decision (for the better) based on the fact that - 1. Thinking of models in terms of 100 MB is not fair. No production model is going to be that small.
2. “ Hence, since the number of clients is more, on an average, the individual performance is lower.” - This is a fundamental problem which again is not being addressed in any form.
The results overall are quite ordinary. However, the novelty of forming a gradient-free approach is why I proposed acceptance, and that is the only contributing factor.
Thank you for the detailed rebuttal. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable comments. We tried to address all concerns in the respective rebuttal sections. Here, we would like to write the global rebuttal for two common concerns raised by the reviewers:
1) More comparison experiments: We added two new results in Table 1 of the paper (shown in the attached pdf file) for the suggested methods FedPer [1] and FedSeg [2]. Both of these methods involve model weights transfer similar to FedAvg and so can be considered as upper bounds for our method.
(a) In the case of FedPer, the majority of the weights are shared with the server and a small head is retained at the client to allow for more personalization of client models. We see that this improves local performance but is not able to perform as well on OOD data. While this is defined for classification, for comparison, we adapt it for segmentation by keeping the final classifier head of our network 'personal' to the model and don't share its weights with the server. We find that our method comes on par with this method on OOD distribution, despite being a black box method not involving model weight transfer.
(b) In the case of FedSeg, all the weights are shared with the server, similar to FedAvg. During the training of the client, an additional step is taken to better align the behaviour of the model for OOD and local distributions. This method was introduced as an improvement over FedAvg. Our method performs on par with FedSeg without sharing model weights.
(c) One of the reviewers suggested a comparison with FedBPT [3], which uses the ZOO method called CMA-ES for learning a prompt to foundation models. This prompt is shared between the client and the server while the same frozen foundation model is used at all clients and the server. CMA-ES is an evolutionary method and hence, requires lots of candidates for each of the learnable parameters. Over several iterations, these candidates converge to an optimal value for the parameter. In FedBPT, the learnable prompt is very tiny compared to the client part of the network in our case in terms of learnable parameters. Hence, when we used CMA-ES instead of SPSA-GC as the zero-order optimization method, it always crashed the program. Hence, we believe that CMA-ES would not be a feasible solution for optimizing a larger number of parameters.
[1] Federated Learning with Personalization Layers
[2] FedSeg: Class-Heterogeneous Federated Learning for Semantic Segmentation
[3] Fedbpt: Efficient federated black-box prompt tuning for large language models
2) Discussion on regeneration of training data: The reviewers had concerns about whether attackers could target the method with existing methods like Diffusion Models or other mechanisms. We present our views on this topic below:
(a) For attacking methods that generate new models to approximate raw data with the lowest error, an optimization problem is solved. Here, the goal would be to find input x that minimizes the error between the predicted value of the original model and the predicted value of the new model. However, these methods would either require the gradients or the ability to query the original client model multiple times, both of which are not available in our proposed method.
(b) Diffusion models present a more interesting challenge since methods like Stable Diffusion can generate synthetic images based on the shared mask and labels. However, various works have found that these Image Generation methods mimic the data distribution that they are trained on [1-4]. Hence, even if such methods are used to attack BlackFed, they would generate data with pixel distribution from existing public datasets and lack the Personally Identifiable Information (PII) in the raw client data, which is private and not accessible to the diffusion model for training. In such a scenario, while the generated synthetic images would be valid data, they should not be considered as replicating the raw input data.
[1] Frame by Familiar Frame: Understanding Replication in Video Diffusion Models
[2] Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models
[3] Analyzing bias in diffusion-based face generation models
[4] Replication in Visual Diffusion Models: A Survey and Outlook
Pdf: /pdf/d8e56c3e4633a43a4567bac151039368947753dc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad | Accept (poster) | Summary: The paper contributes a new metric for measuring the difficulty of a task for a transformer model to learn, called distribution locality. This refers to the amount of an input x that needs to be seen by a model for it to be able to return the corresponding output y. A high locality means that a large amount of the input would need to be made available for the model to correctly predict y, while a low locality means that only some small part of x needs to be given to the model.
This represents a barrier in the kind of tasks that can be learned by transformer models, which the authors call the locality barrier. Above a certain threshold of locality a task becomes impossible for a given transformer to learn.
To overcome the locality barrier, the authors propose various scratchpads as tools that the model can use in getting to its answer. They demonstrate that naive and educated scratchpads are insufficient to overcome the locality barrier, and provide a new type of scratchpad, called an inductive scratchpad, which is able to overcome it.
The scratchpads in the paper are essentially structures that the model trains on that help to format the answer in a way that helps the model arrive at the correct answer. During training the model is given input and output pairs where the output is structured according to a given scratchpad. The output of the model essentially consists only of the scratchpad workings and the answer. Each scratchpad type works by enforcing a certain structure on the steps taken during the working out process.
The inductive scratchpad works by trying to solve a problem at each step through iteratively applying the same learned rule until an answer is found. The key is that each step is based only on the previous state and the original question. By applying this problem-solving step over and over, the model can learn to solve problems outside of its original training distribution.
Strengths: This paper offers many strong theoretical aspects and tackles some of the fundamental problems underlying transformers and cutting edge AI systems.
Originality:
The paper defines a useful and coherent metric for approximating the limits of learning tasks that a model is capable of, in distribution locality, and the mathematics underlying this idea seem sound. This idea is an intuitive explanation of the limits of transformers and is useful in predicting future model capabilities.
The idea of the inductive scratchpad, training the model in ways that inherently include a reasoning step, is a novel idea, and seems broadly applicable to many applications of language models.
Quality:
The level of thinking displayed in the development of these ideas is very impressive, and shows a strong grasp of the problems at the heart of the field. The solution to the problem is both elegant, and argued convincingly, though applications at larger scales of models would have been very interesting to see.
Clarity:
The method of demonstrating the problem of distribution locality helps to make the problem clear, and the mathematics describing it are very clear and crisp. The tests and results presented are also a clear indication of the efficacy of the solution.
Significance:
As mentioned earlier, the problem being approached here is a fairly fundamental one in the domain of AI and of transformers in particular, and if the inductive scratchpad technique is able to generalize to other kinds of models may have a large impact on the way in which future transformer models are trained and can help with fundamental issues of memorisation vs generalization in model training.
Weaknesses: The paper has good ideas but the language used in describing the ideas is often slightly awkward.
For instance we have this sentence: “This takes place for instance in the last 2 examples (parity and cycle task) where it appears useful to learn inductive steps when possible.” This sentence is awkwardly constructed because it is not immediately clear what “this” is referring to, and it feels roundabout in the way it makes its point.
An alternative construction could read:
“The parity and cycle task examples demonstrate how inductive steps can be useful in solving problems, making them illustrative candidates for the inductive scratchpad.”
There are many such instances in the paper, and I think focusing on clear sentence construction would significantly improve the readability of the paper.
The examples given for the scratchpad outputs are very difficult to parse visually, and use different symbols between tasks which makes them hard to compare. If the components of the task were clearly broken down per example this would make it much easier to read, or if the same notation was used across examples. Potentially more human readable examples could have been provided alongside the technical outputs.
The use of some real world tasks with the different scratchpads would have significantly strengthened the claims of the authors. While the examples given are clear indicators of the efficacy of the inductive scratchpad it is still somewhat difficult to infer that the solution will scale.
It would have been useful to compare the inductive scratchpad to other state of the art capabilities for extending the reasoning of transformer models to give a better sense of the scope of the problem and the impact of the solution.
The paper also doesn’t offer clear implementation steps for how the inductive scratchpad might be implemented in other types of problems. The implementation as presented here seems quite tricky to apply to other common transformer applications.
Technical Quality: 4
Clarity: 2
Questions for Authors: To what extent would training other transformer models be possible with this approach? For example, could this technique feasibly be used in language models? Some mention is given to various algorithmic tasks, but it is unclear to what extent these techniques are bottlenecked by current methods, and some discussion of how broader application may be done may help strengthen the paper. Particularly discussing scalability.
Does this approach generalise to other architectures at all?
How much does this approach impact compute demands if at all?
Can you provide a step by step method for implementing this in other kinds of problems?
Can you provide a more fleshed out discussion of impacts on the broader field and applicability to other problems?
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The discussion of limitations is fairly clear in the paper discussing the fact that it primarily only applies to transformers, the limits of its generalization, and the fact that applying it to other problems is left as future work.
However, discussion of implementation is not explicitly mentioned, and could be included to improve the discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback on the writing of the paper; we will revise the paper accordingly to enhance its readability. In particular, we will use different colors alongside annotations for the inductive scratchpad examples to make reading and parsing them easier.
Further, note that we include the details of the implementation of the inductive scratchpad in Appendix C.2; does the reviewer suggest that we should move this to the main text? This may be challenging with the available space.
Please read the answers to the questions below.
> Q. How would the inductive scratchpad scale? What’s the impact on the compute demands?
The inductive scratchpad approach can scale easily to larger models, datasets, and context lengths, as it reduces the attention computation and the *effective* context size. Assume we have states s\[0\], …, s\[k\] in the inductive scratchpad. The inductive scratchpad enforces s\[i\] to use only the information of s\[i-1\]. This behavior can be implemented either by masking the attention between s\[i\] and all previous states except s\[i-1\], or by manually trimming the states prior to s\[i-1\]. As a result, the inductive scratchpad actually reduces the ‘effective’ context size, so the method scales easily. Note that due to the reduced effective context length, one can also use less memory during both training and inference. We have already discussed the implementation of the inductive scratchpad and its efficiency in Appendix C.2, and we will further extend that discussion with a focus on scalability.
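To make the trimming idea concrete, here is a minimal Python sketch (our illustration; the function and token names are hypothetical, not from the paper's code) of how the effective context for generating state s\[i\] reduces to the question plus s\[i-1\]:

```python
def effective_context(tokens, start_tok="<START>", state_tok="#"):
    """Keep only the question (everything up to <START>) plus the most
    recent completed state -- a manual-trimming implementation of the
    inductive restriction that s[i] depends only on s[i-1]."""
    if start_tok not in tokens:
        return tokens  # non-inductive input: default behavior is kept
    q_end = tokens.index(start_tok) + 1
    question, rest = tokens[:q_end], tokens[q_end:]
    # split the generated part into states on the <STATE> (#) token
    states, cur = [], []
    for t in rest:
        cur.append(t)
        if t == state_tok:
            states.append(cur)
            cur = []
    prev = states[-1] if states else []  # last completed state s[i-1]
    return question + prev + cur         # plus the partial state s[i]
```

Whatever the number of generated states, the model only ever attends to a question-plus-one-state window, which is why the memory and attention costs stay bounded.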
> Q. Does this approach generalize to other architectures and language models?
Yes, this approach is easily implementable for other variants of Transformers and language models. First, note that the inductive behavior is controlled by two special tokens: the \<START\> and \<STATE\> (denoted \#) tokens. Thus, for inputs that do not require induction (such as normal text generation) the model would keep its default behavior (as it would not generate \<START\>/\<STATE\> tokens). Regarding the implementation, assume that we have states s\[0\], …, s\[k\]. As discussed in Appendix C.2, there are two methods for implementing the inductive scratchpad. The first approach is based on masking the attention between s\[i\] and all previous states except s\[i-1\] and reindexing the positions of the tokens of s\[i-1\] and s\[i\] as if s\[i-1\] was generated right after the question. This implementation would only require attention masking and reindexing of positions of tokens which is easily doable in all variants of Transformers (e.g., whether they use absolute or relative positional embedding). The second approach is also implemented by manually removing the states prior to s\[i-1\] for generating s\[i\] before giving the text to the model during inference and splitting the input into pairs of states (s\[i-1\]\#s\[i\]) during training. The second implementation method only modifies the input/training data of the Transformer prior to feeding it to the model (for generation this is in the outer loop that calls the model for generating one token at a time) and hence would work with any Transformer model.
Finally, note that our current implementation is for decoder-only Transformers as they are more common nowadays. Nevertheless, encoder-decoder Transformers can be handled in the same fashion. The only difference is that in the encoder-decoder models, the question (Q) is already separated from the rest of the tokens, as it sits in the encoder part of the architecture; this makes the implementation slightly easier, since we can consider removing the \<START\> token.
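For concreteness, a small sketch (our own illustration, assuming per-position state indices are already known) of the attention-masking variant: question tokens are marked with -1, and each state s\[i\] may attend only to the question, itself, and s\[i-1\]:

```python
def inductive_attention_mask(state_ids):
    """state_ids[p] = -1 for question tokens, otherwise the index of the
    state containing position p. Returns mask[i][j] = True iff position
    i may attend to position j (causal + inductive restriction)."""
    n = len(state_ids)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):  # causal: only look backwards
            si, sj = state_ids[i], state_ids[j]
            # attend to the question, to the own state, or to the
            # immediately preceding state -- nothing older
            mask[i][j] = (sj == -1) or (sj == si) or (sj == si - 1)
    return mask
```

In the full implementation one would also reindex the positions of s\[i-1\] and s\[i\] as if s\[i-1\] were generated right after the question; that part is omitted here.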
> Q. Can you provide a step by step method for implementing this in other kinds of problems?
As a large class, here we consider algorithms that have a loop in which some variables/arrays are updated until the answer is computed. To design an inductive scratchpad for such an algorithm, we first put the input data of the question (e.g., a graph) and all other constant and permanent information before the \<START\> token so that the Transformer attends to this information in all of the generation steps. Afterward, we define the states of the inductive scratchpad to be all of the variables that are updated in the loop of the algorithm (including the loop counters). Generating the next step/state in the scratchpad then becomes equivalent to one iteration of the loop. For example, for the cycles task, we keep the graph input as the permanent info in the question part before the \<START\> token. After that, in each state we keep track of the node that we are visiting in our search; this is equivalent to having a variable that tracks the current node of the search in a for loop. Also, the Transformer learns the termination condition of this loop, which is reaching the source/query vertex. Similarly, for the parity/addition task (random space method), we keep the input as the permanent part before the \<START\> token. In the states of the inductive scratchpad, we keep track of the location of the corresponding digit(s) and their values, along with other variables like the current value of the parity, the carry, and the addition result. This is again similar to a for loop that updates these variables in each iteration. Note that the inductive scratchpads for addition use a minor adjustment of the inductive scratchpad format, i.e., the initial generation of a random answer. This part is added simply for the length generalization of our experiments and to prevent scenarios in which the Transformer has to generate tokens at positions unseen during training (note that we train our Transformers from scratch with absolute positional embedding).
However, we expect one can skip such adjustment when using a pre-trained Transformer with proper relative positional embedding.
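To illustrate the recipe end-to-end, here is a toy sketch of ours (not the paper's data-generation code; names and the exact text format are made up) of how the cycles task maps to a question plus loop-variable states, assuming every vertex has a single out-edge:

```python
def cycle_task_states(edges, source, target):
    """Build an inductive-scratchpad-style sample for the cycles task:
    the graph (permanent info) goes before <START>; each state holds
    the loop variable -- the current node of the walk from `source`."""
    nxt = dict(edges)               # assumes out-degree 1 for every vertex
    question = f"{edges} {source}->{target}? <START>"
    node, states = source, []
    while True:                     # one loop iteration = one state
        node = nxt[node]
        states.append(f"{node} #")
        # termination condition the Transformer must learn:
        # back at the source (different cycle) or reached the target
        if node in (source, target):
            break
    answer = "yes" if node == target else "no"
    return " ".join([question] + states + [answer])
```

Each `#`-terminated state corresponds to one iteration of the search loop, and the final token is the label.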
> Q. Broader application and comparison of the inductive scratchpad?
Please see the global response.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: I thank the authors for the careful responses to my questions. I hope that these can be used to clarify issues that I had questions about in the final version of the paper if it is to be accepted.
---
Rebuttal 2:
Comment: > Q. How does the inductive scratchpad compare to SOTA methods?
The length generalization for tasks such as parity and addition has been studied rather extensively recently, and overall the reported improvements remain fairly modest. We note that all of the proposed schemes place some additional conditions on their inputs and/or use solution designs specific to the tasks (e.g., tailored positional embeddings). Our work trains the Transformers from scratch and uses absolute positional embeddings; however, we also ask for a mild condition on the input: random spaces for parity and addition. Since each work has its own modifications and solution design, it is not trivial to compare performance alone. However, a rough comparison of performance and approach looks like this:
### Addition
| work | performance | method and assumptions |
|---|---|---|
| [1] | from 8 digits to 9 digits | Using NoPE (no positional embedding) |
| [2] | from 10 digits to 12 digits | Scratchpad with recursive format |
| [3] | from 40 digits to 50 digits | Reverse order of the output + ‘index hints’ (special tokens that go before each digit). An example looks like a5b4 + a3b7 = b1a9. |
| [4] | from 5 digits to 15 digits | Encoder-only model (no autoregressive generation) + relative positional embedding + padding inputs (input looks like 1 2 \<PAD\> + 3 9 \<PAD\>) |
| [5] | from 40 digits to 60 digits | FIRE relative positional embedding + randomized position encodings + reversed output + index hints (as above, an input looks like a5b4 + a3b7) |
| **Our random space method** | from 10 to 18 digits | Using random spaces in the input + inductive scratchpad. An input looks like 94\_+\_3\_\_1= (we use \_ instead of space for better readability). |
| **Our shift method** | from 4 to 26 digits | Using a random text before each number + inductive scratchpad. An input looks like fs\$46+ih\$98. |
### Parity
| work | performance | method and assumptions |
|---|---|---|
| [1] | from 8 bits to 12 bits | Using NoPE (no positional embedding) |
| [3] | from 30 bits to 50 bits | Using scratchpad + ‘index hints’ (special tokens that go before each bit in the input). An example looks like a0b0c1d1e0 > +c-d+ |
| [6] | from 8 bits to 20 bits | Using pretrained large models (128B) + prompting + fine-tuning + scratchpad |
| **Our method** | from 30 bits to 55 bits | Using random spaces in the input + inductive scratchpad. An input looks like \_01\_10\_0\_\_1\_ |
*(reported values are the median over seeds; some seeds do better than others)*
Therefore, our solution requires among the least stringent input modifications and provides a significant improvement in length generalization compared to prior works.
[1] The Impact of Positional Encoding on Length Generalization in Transformers, Kazemnejad et al. 2023\
[2] Positional description matters for transformers arithmetic, Shen et al., 2023\
[3] What algorithms can transformers learn? a study in length generalization, Zhou et al., 2023\
[4] Length generalization in arithmetic transformers, Jelassi et al., 2023\
[5] Transformers Can Achieve Length Generalization But Not Robustly, Zhou et al., 2024\
[6] Exploring Length Generalization in Large Language Models, Anil et al., 2022 | Summary: The paper investigates the conditions for Transformers to learn algorithms with length generalization. The paper proposes a formal definition of “distribution locality” and conjectures that this measure is highly correlated with the capability of Transformers to weakly learn. The negative direction of the conjecture is theoretically justified in one particular case by showing that a class of graph classification problems with high distribution locality cannot be weakly learned. The paper then shows that scratchpads that reduce distribution locality can improve learning, and that inductive scratchpads, which remove previous intermediate states from the context, can improve length generalization.
Strengths: The paper provides exact definitions for “Distribution Locality” and states solid conjectures that connect the measure and learnability. If proven, these conjectures can significantly advance our understanding of Transformers trained from scratch. The work also shows the connection between scratchpads/Chain of Thought and the reduction of distribution locality, which could further our understanding of the success of CoT. The work presents a formal proof of an impossibility result for weakly learning Transformers, which, to the best of my knowledge, is novel in the field since most prior works focused on limitations of expressivity.
Weaknesses: The theoretical result does not rigorously prove the “negative side” of the conjecture, since it is unclear which properties of the graph classification task make the problem unable to be weakly learned. Further theoretical work is required to show the connection between distribution locality and learnability.
The presentation of the paper could be improved to facilitate the clarity and understanding of the core results of the paper. In particular, in section 2, the authors should consider using shortened/less formal definitions and remarks and putting extended versions in separate sections or the appendix. For example:
Conjecture 1: Can be shortened using “if and only if” instead of having 2 sentences for “if” and “only if”?
Remark 2 seems loosely relevant to the main results of section 2 or the main flow of the paper. It is recommended to have a separate discussion section for these extensions and/or put them in the supplemental materials
Conjecture 2 is hard to understand on its own and is not explained from a higher-level perspective (i.e., what exactly is an agnostic scratchpad?). It is recommended to provide a definition and its high-level explanation, and to shorten Conjecture 2 using the definition for better clarity.
In Theorem 1, it is unclear why the label names a_i, b_i, c_i are necessary. It seems that the label names do not impact the correctness of the theorem, and thus their descriptions can be removed for clarity of the theorem.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is the graph classification task possible to be solved by a Transformer regardless of training procedure? In particular, does there exist a set of weights for a Transformer with log-precision, constant depth and heads that can solve the graph classification task for all n? This is quite important since the work is focused on the learnability of Transformers instead of expressiveness, which is greatly studied in prior works, but if the Transformers cannot express the task, then the learnability impossibility result seems trivial.
Additional comments:
The appendix section B repeatedly refers to $d_\text{emb}$ and seems to use $d_\text{emb}$ to denote the maximum length of the input string (i.e., prompt/question). However, $d_\text{emb}$ typically refers to the embedding dimension (i.e., number of entries in the embedding vector) of every token. Why is there a relationship between $d_\text{emb}$ and input length?
In appendix section B.2 “Length generalization for addition”, why is “First, we generate a random sequence of tokens with size $d_\text{emb}$ + 2 such that it begins with ‘$’, e.g., $xgwg6 we call this text ans[0]” useful and necessary?
Does “pointer” denote a separate pointer token for each possible input position? Are “[00]” one single token or 3/4 separate tokens? What does it mean by the value of the ith bit ”can be retrieved using the pointer”, is this specially processed in the generation process?
Why does pointer retrieval operation not break the distribution locality? It seems that for different pointers the model need to copy different positions in the input (the pointer position), but distribution locality requires that the positions that affect the label is the same for any possible input. Doesn’t retrieving the ith position break distribution locality?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The inductive scratchpad requires that each reasoning step is only dependent on one previous reasoning step and the input. However, this is less realistic in more complex reasoning tasks such as mathematical reasoning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We address the rest of the remarks and questions below.
> Q. Is the graph classification task possible to be solved by a Transformer regardless of training? This is quite important since the work is focused on the learnability of Transformers instead of expressiveness but if the Transformers cannot express the task, then the learnability impossibility seems trivial.
Yes, a Transformer can express the task. As mentioned in the general comment, this is not a case where the failure comes from expressivity limitations. In the specific version of Theorem 1, the problem is equivalent to checking whether a product of permutations in $S\_3$ is the identity, which is solvable by a constant-depth transformer. The versions with a less restrictive formatting of the input are probably not expressible by a transformer with log-precision, constant depth, and polynomial size. However, we do not restrict ourselves to this specific regime: our definition of T-regular allows an arbitrary polynomial depth, and a logarithmic-depth transformer can solve the graph classification task by finding the vertex $2$ edges from each vertex, then the vertex $4$ edges from each vertex, then the vertex $8$ edges from each vertex, and so on, and then combining those to find the vertex $n$ edges from some designated starting vertex and checking whether it is the same vertex.
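For intuition, the repeated-doubling argument can be written out as a short sketch (our own illustration, not the paper's code; `nxt` maps each vertex to the vertex one edge away):

```python
def vertex_at_distance(nxt, start, n):
    """Find the vertex n edges from `start` by repeated doubling:
    O(log n) rounds, mirroring what a logarithmic-depth transformer
    could compute layer by layer."""
    jump = dict(nxt)                 # jump[v] = vertex 1 edge from v
    result = start
    while n:
        if n & 1:                    # consume one bit of n
            result = jump[result]
        jump = {v: jump[jump[v]] for v in jump}  # double the jump length
        n >>= 1
    return result
```

Checking whether the vertex $n$ edges from the designated starting vertex equals that vertex then decides the classification, as described above.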
> Q. Appendix B repeatedly refers to $d\_{emb}$ and seems to use $d\_{emb}$ to denote the maximum length of the input string. However, $d\_{emb}$ typically refers to the embedding dimension of every token. Is there a relationship between the two?
Here we used embedding dimension $d\_{emb}$ for the input space as in “a two-digit number embedded in a dimension $d\_{emb}=5$ such as \_6\_1\_”. In other words, we used embedding dimension as the number of tokens that the input uses given the random spacing that we use. **This is completely independent of the embedding dimension of the Transformer architecture.** You’re right that this is not an ideal notation. We will update this.
> Q. In Appendix B.2 “Length generalization for addition”, why is generating an initial random answer ans\[0\] (starting with $ token) useful?
We generate a random sequence of tokens with size $d\_{emb}+2$ starting with \$ as ans\[0\]. In each of the steps afterward, we shift ans\[i\] one unit to the right (losing the rightmost token) and concatenate the computed digit of the correct answer from the left, i.e., ans\[i+1\] = \<computed digit of the answer\> ShiftRight(ans\[i\]). Note that initializing ans\[0\] with a random placeholder text ensures that the length of ans\[i\] does not change across generation steps. If we had initialized ans\[0\] with an empty string, then ans\[i\] would have had $i$ tokens. As a result, during length generalization, the Transformer would have had to write tokens at positions it had never seen during training, which is impossible as we train from scratch and use absolute positional embedding. Hence, the initial random text for ans\[0\] ensures that the Transformer will not see unseen positions during generation. We expect one can avoid this placeholder by using an appropriate relative positional embedding (potentially in addition to pre-training).
We simply use \\$ as the first token of ans\[0\] so that the final answer can always be easily recognized (it’s the number left to the $).
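A toy sketch of this placeholder-and-shift mechanism (our own illustration; the random letters stand in for arbitrary placeholder tokens, and we assume the sum has at most size+1 digits):

```python
import random

def addition_answer_states(a, b, size):
    """ans[0] is a random string of size+2 tokens starting with '$';
    each step prepends the next computed digit (least significant
    first) and shifts right, so every ans[i] has the same length."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    ans = "$" + "".join(random.choice(letters) for _ in range(size + 1))
    states = [ans]
    for d in str(a + b)[::-1]:      # digits, least significant first
        ans = d + ans[:-1]          # prepend digit, drop rightmost token
        states.append(ans)
    return states, ans.split("$")[0]  # final answer sits left of '$'
```

Because every `ans[i]` keeps the same fixed length, the model never has to write at positions beyond those seen during training.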
> Q. Does “pointer” denote a separate pointer token for each possible input position? Are “\[00\]” one single token or 3/4 separate tokens? What is the meaning of "the value of the ith bit can be retrieved using the pointer", is this specially processed in the generation process? It seems that for different pointers the model needs to copy different positions in the input (the pointer position), doesn’t that result in a high locality?
In this example, \[00\] has 4 separate tokens. In the examples of lines 594 and 629, each character is an individual token (other than special tokens \<START\> and \<EOS\>). **We emphasize that we do not do any processing for the pointers during the generation and all is learned by the Transformer model itself.** By retrieving, we simply meant that the Transformer can learn to generate the token that is appearing at a specific location indicated by the pointer. We will further clarify this part and the tokenization process in our revision.
Also, note that the pointer retrieval is a low locality operation in the setting that we consider and hence easily learnable. For simplicity, assume that we have 2 pointer tokens \[pq\] and we have $N$ potential tokens (t\_0,..., t\_{N-1}) for retrieving. Denote the output by $Y$. For simplicity also assume that the probability for retrieving each of these tokens is the same. In this case, the locality is 1\. Indeed it’s enough for the model to attend to t\_0 to get a significant correlation with the output. More precisely, with probability $1/N$ (if the pointer is pointing to the first token), the output $Y$ will be equal to t\_0. Hence, the mutual information $I(Y;t\_0) = poly(1/N)$ (as we can determine $Y$ in $1/N$ of the cases based on t\_0). Therefore, retrieving based on the pointer is indeed a low locality task.
> Q. How does the inductive scratchpad work for more complex reasoning tasks such as mathematical reasoning?
Please see the global response on the broader applicability with a focus on math.
> Q. The theoretical result does not rigorously prove the “negative side” of the conjecture.
Indeed, we do not prove the “negative side of Conjecture 1” and are careful to never make that claim. We prove only a specific case falling under the negative side of Conjecture 1 that relies on this variant of the cycle task. We believe that this variant is interesting in view of prior lower-bound work as it does not follow from any previous papers providing negative results such as \[12, 13, 28\] (refs in our paper).
---
Rebuttal 2:
Comment: > Q. The presentation can be improved. In particular, in section 2, the authors should consider using shortened/less formal definitions and remarks and putting extended versions in separate sections or the appendix. For example: Conjecture 1: Can be shortened using “if and only if” instead of having 2 sentences for “if” and “only if”?
Conjecture 1 is if and only if for constant size alphabets; we will update this; please see the global comment at the beginning in that regard.
> Q. Remark 2 seems loosely relevant to the main flow of the paper. It is recommended to have a separate discussion section for these extensions and/or put them in the supplemental materials
Thanks for the suggestion, we will revise the structure.
> Q. Conjecture 2 is hard to understand on its own and is not explained from a higher-level perspective.
We will break Theorem 1 and Conjecture 2 into a definition and then a conjecture/theorem to enhance readability.
In the agnostic scratchpad version of the problem, we give the transformer access to a scratchpad but have no prior knowledge of what the transformer should write in it. So, in order to train it to use the scratchpad well, we consider an entry it writes in the scratchpad to be “correct” if it leads to the transformer giving the right output at the end and “incorrect” if it does not, and we define the loss of the scratchpad to be equal to the loss of the output. We can add more explanation to the paper.
> Q. In Theorem 1, it’s unclear why are the label names a_i, b_i, c_i necessary? It seems that the label names do not impact the correctness of the theorem and thus their descriptions can be removed for clarity of the theorem.
The justification for the vertex names in Theorem 1 is not trivial. In Theorem 1, the input is formatted in such a way that it can be divided into blocks that each specify how a_i, b_i, and c_i are connected to a_{i+1}, b_{i+1}, and c_{i+1} for some i (each corresponding to a permutation in $S_3$). Each block has 6 possible values, and these blocks are independent except for the fact that their overall product is always an even permutation. If we changed the way we assign labels to vertices so that the set of vertex names mentioned in a given block was not always the same, this would no longer be the case and the proof would require additional arguments.
---
Rebuttal Comment 2.1:
Title: author-reviewer discussion
Comment: Thank you for your review of this paper. The authors have posted rebuttals.
Please could you enter into a discussion with them by replying to their rebuttal
of your review. A simple 'Thank you for your rebuttal' acknowledges that you
have read their rebuttal, even if you feel that their rebuttal does not require a
detailed reply. Of course, a detailed reply is always preferable.
Thanks!
Area Chair | Summary: In the context of Chain-of-Thought prompting, Transformer models can solve more reasoning problems when recording their intermediate reasoning steps on a 'scratchpad'. This paper attempts to formalize the hardness of reasoning tasks (i.e., the locality barrier) and explores the limitations of scratchpads in seq2seq (i.e., Transformer) reasoning. Three kinds of scratchpads (agnostic, educated, and inductive) are studied, and their effectiveness in breaking the locality barrier is empirically validated on a synthetic cycle task (against their locality degree).
Strengths: - Given the emerging need of LLM reasoning, the studied subject is of great interest, both theoretically and practically.
- This paper is mostly well-written and easy to follow. The mathematical formulation of locality degrees is natural.
- I especially appreciate the introduction/formulation of the educated and inductive scratchpads (though I am not sure if they are first formulated by the authors or not). Unlike standard agnostic scratchpads, both educated and inductive scratchpads serve as theoretical testbeds and provide practical guidance for building effective LLM reasoning pipelines.
Weaknesses: I don't see any major weakness, except that the formulated inductive scratchpads bear some similarity with a previous design of scratchpads based on dynamic programming (https://openreview.net/pdf?id=qHrADgAdYu). Some comparisons and discussions could be useful here.
Technical Quality: 3
Clarity: 4
Questions for Authors: - line 229, 'Parity functions are known to be hard to learn []': I guess a reference is intended but missed here.
- line 231, '(n-k) = w(1)': I don't quite understand what 'w(1)' means here. The whole sentence could use some elaboration. I believe another intended reference is missed as well.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We address the rest of the remarks and questions below.
> Q. What is the connection to the “Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective“ paper which considers dynamic programming tasks solved by scratchpad?
Thank you for introducing this paper, we will add it to our literature review. In comparing it with the inductive scratchpad part of our paper, the inductive scratchpad is indeed suitable for learning DP tasks. DP tasks can be implemented using a loop, and the inductive scratchpad can easily learn algorithms with loops. In order to learn such algorithms with loops one has to put all the variables/arrays that are updated in the loop in the state of the inductive scratchpad (including the counters of the loop). In that case, each state generated by the inductive scratchpad becomes similar to one iteration of the loop. The inductive scratchpads that we have provided for the parity, addition, and the cycles task can also be viewed as algorithms with loops.
**Nevertheless, we clarify that the mentioned paper does not use induction or any similar idea.** Note that in our paper, we make Transformers behave inductively on such reasoning tasks via the introduction of two special tokens, \<START\> and \<STATE\> (also denoted as \#), and dynamic attention masking according to these new tokens. As a result, we obtain strong length generalization performance (e.g., almost double length for the parity and addition tasks), whereas without the inductive scratchpad idea length generalization is weak or absent (e.g., in the DP paper the length generalization is from 15 to 17/18). Further, note that the inductive scratchpad allows Transformers to work with reasoning tasks with longer scratchpads/contexts thanks to the dynamic attention masking. For example, the tasks considered in the DP paper have considerably shorter scratchpads than the parity and addition tasks in our work and hence could be considered simpler for the Transformers.
On the theoretical side, the suggested paper focuses only on the representation power of constant-depth transformers. It shows that there are tasks that cannot be represented by constant-depth Transformers but can be expressed by a constant-depth Transformer with a scratchpad. We note that our paper considers learnability hardness (beyond expressivity). For example, the cycles task can be represented by polynomial-size transformers with logarithmic depth but cannot be learned by them, as shown in Theorem 1.
> Q. Missing reference on line 229.
Thanks for noticing the typo. The intended reference was \[12\] (Emmanuel Abbe and Colin Sandon. Polynomial-time universality and limitations of deep learning. Communications on Pure and Applied Mathematics, 76(11):3493–3549, 2023).
> Q. Line 231: What does $k\wedge (n-k) \= \\omega(1)$ mean? Elaborate on the whole sentence.
Here, by $k \wedge (n-k)$ we meant the $\min\{k, n-k\}$ operation, and by the little omega $\omega(1)$ we meant any quantity that grows at any rate beyond constant (as $n$ scales). In other words, a parity function of degree $k$ is learnable in polynomial time only if either $k$ or $n-k$ is constant. Note that the locality of the parity task is also equal to $\min\{k, n-k\}$ (because of the histogram). Therefore, the constant locality requirement is consistent with prior works on the hardness of parities. Note that in the locality definition, we have access to the histogram of all tokens, and as a result, the full parity $x_1 x_2 \cdots x_n$ has locality $0$. Thus, thanks to the histogram, the hardness of learning $x_1 x_2 \cdots x_k$ (degree-$k$ parity) and $x_{k+1} \cdots x_n$ (degree-$(n-k)$ parity) is similar; hence we use $\min\{k, n-k\}$.
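As a small illustration of why the histogram makes $\min\{k, n-k\}$ the right quantity (our own sketch, not code from the paper): given the histogram (the total count of ones), a degree-$k$ parity can be computed from whichever of the subset or its complement is smaller:

```python
def parity_via_histogram(bits, subset):
    """Compute the parity of the bits indexed by `subset`, reading
    either the k subset bits directly, or the n-k complement bits plus
    the histogram (total number of ones) -- whichever is smaller."""
    n, k = len(bits), len(subset)
    total_ones = sum(bits)                      # histogram information
    if k <= n - k:
        return sum(bits[i] for i in subset) % 2
    comp_sum = sum(bits[i] for i in range(n) if i not in subset)
    return (total_ones - comp_sum) % 2
```

In particular, the full parity ($k = n$) needs no individual bits at all beyond the histogram, matching the locality-$0$ statement above.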
---
Rebuttal Comment 1.1:
Comment: Many thanks for the detailed response! | Summary: This paper proposes the concept of a "locality barrier" and conjectures that transformers can (weakly) learn to solve a problem only if it doesn't have a locality barrier, i.e., doesn't require global reasoning (as specifically defined by the authors). They prove, in particular, that for the cycle task that they propose, there exists a locality barrier and that transformers struggle to learn it.
These results about learnability of Transformer models nicely complement recent literature on their expressivity.
Lastly, the authors propose the concept of an "inductive scratchpad", which takes the special form Q#state1#state2#... and uses attention masking, etc., such that predicting state5 from the entire prefix looks just like predicting state5 from Q#state4#, forcing models to learn an explicit state representation and to condition only on it in a Markovian way. The paper reports empirical results showing that when trained this way, models generalize better to larger input length.
Strengths: * The notion of "locality barrier" seems interesting and, to my (somewhat limited) knowledge, novel. The study of learnability in transformers is a good complement to recent studies on transformer expressivity.
* The idea of "inductive scratchpad" is intuitive. (I assume it is novel, deferring to other reviewers on novelty.) Using masking to force models to pay attention only to the current state (and the full input) seems effective for certain problems, and experiments confirm this.
* The intuitive claims / conjectures in the paper are supported by sufficient focused experiments.
Weaknesses: * Intuitively, I don't quite see why global reasoning (as suggested by the proposed locality barrier) should, in fact, be hard for transformers. After all, full attention gives them access to the entire input and entire sequence of past states, in the case of a scratchpad. In fact, having "all to all attention" is one of the key distinguishing strengths of a Transformer model, compared to, say, RNNs. Could the authors clarify why global reasoning seems to be a fundamental hurdle? (Note that some prior papers have noted *sequentiality* rather than global reasoning as a big hurdle.)
* While the paper tries to be formal with theorems and conjectures, the way it is written is somewhat informal. E.g., conjecture 1 mentions phrases like *inverse polynomial edge* without full clarification, as far as I saw); Theorem 1 and Conjecture 2 are written in a very lengthy and verbose way (it would be more standard to define concepts in text, and then have a succinct theorem/conjecture about them).
* The paper can use some more intuition. E.g., the remark right after Definition 2 (distribution locality) tries to explain what the definition is trying to capture. However, I still struggled to understand it fully. E.g., why exactly the mutual information is targeted to be $n^{-O(1)}$, what the role of the *histogram* of tokens is, etc., could be better clarified. Lemma 1 is an important illustration of the notion of locality, but is stated with only a brief explanation. A sketch of the proof from the appendix would be valuable. In general, wherever proofs are deferred to the appendix, a mention of this in the main paper (ideally with a brief proof sketch) would be useful for the readers.
* While the notion of an *inductive scratchpad* is nicely general, the specifics of the state are highly dependent on the problem at hand. The authors showed that if the notion of state is appropriately chosen, models are able to learn (and generalize) on the few considered tasks. What remains unclear is whether appropriate notions of state can be learned for new tasks and whether this concept works well in the popular "in-context learning" setting.
MINOR (typos etc.):
* line 12: "breaks the locality barrier" instead of "breaks the locality"
* line 45: did you mean test accuracy of more than 80% or test error?
* line 85: [16] seems to be about chain of thought and relation to Turing machine based computation classes. Here and in some other places, did you mean the paper titled *The Parallelism Tradeoff: Limitations of Log-Precision Transformers* by the same authors? That work showed TC^0 as the bound, not TC^1, though the generalization you mention in line 171 is accurate.
* line 193: I'm not sure why a_i etc. should be uniform random. Shouldn't the vertex be labeled a_i (and not b_i) if it's at distance i from a_0?
* line 307: did you mean "as if the input was"? (rather than "as the input was")
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see notes about clarifications in the weaknesses section above.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We address the rest of the remarks and questions below.
> Q. Intuitively, why is global reasoning hard for transformers?
Please see first the global response. For more intuition: we train transformers using a gradient descent algorithm, which means we need a significant gradient in order to make progress. I.e., we need the target function to have a nontrivial correlation with the derivative of the function the transformer is currently computing, with respect to at least one of its weights. If there is a constant-sized set of inputs that can be used to predict the target function with nontrivial accuracy, then there is a low-degree polynomial in the inputs that is correlated with the target function, and we would expect at least some of the derivatives to be significantly correlated with it. However, if there is no such set of inputs, then no low-degree polynomial is significantly correlated with the target function. There is a superpolynomial number of uncorrelated high-degree polynomials, so the derivatives of the transformer with respect to its weights cannot be nonnegligibly correlated with a nonnegligible fraction of them. So, we will tend to be unable to learn the target function in that case.
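This contrast can be checked numerically. The sketch below is our own illustration (not from the paper): it estimates how strongly two targets correlate with a single input coordinate, comparing the majority function (which weakly correlates with any single input, a degree-1 polynomial) against the parity of a hidden fixed subset of the inputs (where no single coordinate carries any signal).

```python
import numpy as np

# Our own sketch: majority of n=101 signs correlates with a single input at
# rate ~ sqrt(2 / (pi * n)) ~ 0.08, so a degree-1 polynomial already carries
# gradient signal; the parity of a hidden half of the inputs correlates with
# no single input, so no low-degree signal exists.
rng = np.random.default_rng(0)
n, samples = 101, 100_000
X = rng.choice([-1, 1], size=(samples, n))

majority = np.sign(X.sum(axis=1))                   # n is odd, so no ties
hidden = rng.choice(n, size=n // 2, replace=False)  # unknown fixed subset
parity = X[:, hidden].prod(axis=1)

corr_majority = np.mean(majority * X[:, 0])  # clearly nonzero
corr_parity = np.mean(parity * X[:, 0])      # ~ 0 up to sampling noise
```

With these sizes, the first estimate is well above sampling noise while the second is statistically indistinguishable from zero, matching the "weakly local vs. truly global" distinction.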
> Q. Terms like *inverse polynomial edge* in Conjecture 1 can be further clarified. Theorem 1 and Conjecture 2 are written in a verbose way.
Please see the general comments. We can split Theorem 1 and Conjecture 2 into a definition part first and then the theorem/conjecture statement to enhance the readability.
> Q. The paper can use some more intuition. E.g., why is the mutual information targeted to be $n^{-O(1)}$? What's the role of the histogram. Lemma 1 can be further explained. A sketch of the proof from the appendix would be valuable.
We can certainly add more intuition, deferring other parts to the appendix. Please see the general comment and the previous question’s answer as a starting point. Regarding Lemma 1’s intuition: if you see only $n-1$ edges in this task, those $n-1$ edges tell you nothing useful, because you are missing the one edge needed to conclude whether you in fact have a short $n$-long cycle closing, or instead a longer path in the $2n$-long cycle that does not close. Because of the uniform distribution on how the vertices are labeled, the two scenarios are equally likely, and you can thus not infer any useful information. This means that you need at least $n$ edges to get at best a non-zero correlation, so the locality is at least $n$. Further see Appendix E.
The role of the histogram is also natural for the Transformer architecture. To see this, consider removing positional embeddings; the remaining architecture would see the input as a set, using the number of times each token appears. This effect can also be observed if, for example, positional embeddings are initialized small enough. As a result, Transformers can easily access the token count (histogram) information of the input. Here, the histogram is similar to the bag of words in the natural language processing domain. We will add more details on this.
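As a concrete illustration (our own sketch, not from the paper), a permutation-invariant target such as the modular sum is fully determined by the token histogram alone, which is why the histogram "takes down" the locality for such targets:

```python
import numpy as np

# The histogram (token counts) is a permutation-invariant summary of the
# input. A symmetric target such as the sum modulo q can be computed from
# the histogram without any positional information.
q, n = 5, 20
rng = np.random.default_rng(0)
tokens = rng.integers(0, q, size=n)

hist = np.bincount(tokens, minlength=q)              # "bag of tokens"
mod_sum_from_hist = int((np.arange(q) * hist).sum() % q)
mod_sum_direct = int(tokens.sum() % q)               # equal by construction
```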
Also, note that there is a sketch of proof at the beginning of Appendix D.
> Q. While the notion of an *inductive scratchpad* is nicely general, the specifics of the state are highly dependent on the problem at hand. How can other inductive tasks be learned, in particular using "in-context learning".
Please see our general response on the broader applicability of the inductive scratchpad. In-context learning of new inductive tasks is indeed an interesting future research direction. We can imagine two settings for in-context learning of inductive tasks. (1) Having a normal pre-trained model (without the inductive scratchpad implementations or tokens). With the general advancement of LLMs, we think in-context learning of some inductive tasks may be possible. However, the family of such tasks will probably be limited, as the model needs to see tasks similar to the target in question during pre-training. Also, the (OOD) performance of the induction would probably be much more limited than with an architecture that enforces the inductive behavior. As seen in our paper, even training on a standard scratchpad fails at OOD generalization. (2) The second setting is to have a model with an inductive scratchpad (with the special tokens) pre-trained on other inductive tasks. In this setting, in-context learning of new inductive tasks seems more likely; nevertheless, this direction requires new investigations in future work.
> Q. Line 85: Did you mean to cite the paper titled The Parallelism Tradeoff: … by the same authors? That work showed TC^0 as the bound, not TC^1, though the generalization you mention in line 171 is accurate.
You are right; we will cite [The Parallelism Tradeoff…] when there is no CoT, and you are also right that it should say TC^0 instead of TC^1 there.
> Q. In line 193, shouldn't the vertex be labeled a\_i (and not b\_i) if it's at distance i from a\_0?
The 3 nodes that are at distance i from a\_0, b\_0, c\_0 are randomly called a\_i, b\_i, c\_i. If we always called the vertex i edges from a\_0 a\_i then solving the problem would reduce to checking if the next vertex after a\_{n-1} was a\_0, which would be easy to learn. Randomizing the names is necessary to make it harder. Naming the vertices in a manner that does not specify how far they are from a\_0, b\_0, or c\_0 would probably make it even harder to learn, but it would not work with our current proof technique, so we are sticking with these vertex labels for now. And again, the original cycle task is also expected to be hard enough, but this would likely require even more proof technicalities.
We also thank the reviewer for the typos; we will fix them. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive comments. We address some of the remarks and questions in the list below.
**I. On the “locality/globality” message:** this paper puts forward the claim (with both theoretical and experimental results supporting it) that Transformers can efficiently learn (i.e., achieve non-trivial generalization of edge n^{-O(1)} or O(1) in at most polynomial time in the input dimension) if and only if the target at hand has constant *distribution locality*. The “if and only if” is discussed under some assumptions (e.g., regular distributions that prevent non-trivial learning by simply memorizing the output for a very-high-probability input, or a largely scaling alphabet size). It is important to note that:
* **A constant distribution locality means that the target correlates weakly with a constant number of inputs, not that the target only depends on a constant number of inputs**. The target “weakly correlates” with some inputs means that the correlation is inverse polynomial with these, i.e., $n^{-c}$ for some constant $c$. This covers many functions that depend on many/all inputs, and that are thus “global” in that sense. For instance, the majority function of all the tokens correlates weakly at $1/\sqrt{n}$ (because there is a $1/\sqrt{n}$ chance that all the other inputs are tied up and flipping one makes a difference) with any single input, so its locality is at most 1, and in fact it is efficiently learnable by Transformers. Moreover, the formal definition of distribution locality allows for a little more than that, because there is also the histogram of all the input tokens that is given in the mutual information. This means for instance that the modular-sum target (i.e., sum modulo $q$ when tokens are on a field with $q$ elements, or parities when $q=2$), which does not correlate with a single input as opposed to the majority function, is still efficiently learnable because this time the histogram takes down the locality. The intuition of the histogram is that when a function is symmetric (permutation invariant) on the inputs, then the model can ignore the positional embeddings and just compute the function from the histogram, which gives a major dimensionality reduction (the histogram knowledge is similar to the bag of words in natural language processing). So indeed, Transformers can learn some tasks that are “global” in the sense of involving many variables. **The globality notion that we claim is truly hard for Transformers (the non-constant locality) means that no constant number of inputs (on top of the histogram) can even weakly correlate with the target**.
Examples where this takes place are targets like the modular sum of an unknown but fixed subset of, say, half the inputs, which was shown in prior works to be hard (experimentally, or theoretically under technical assumptions; see refs in the paper). With our definition of the distribution locality, we can now capture many more target classes than this specific case, such as the cycle task, with a **precise, general, and concise definition**.
(Also we indeed allow here for all to all attention, as we want the model to be as general as possible.)
* **The failure of efficient learning does not follow from a limitation of the model expressivity:** A regular Transformer as defined in our paper can in fact encode the cycle task of Theorem 1. The reason why the model cannot learn efficiently is that it is trained by a descent algorithm with a random initialization.
**II. Broader applicability of the inductive scratchpad idea.** We believe that the inductive scratchpad idea, and more generally **the idea of dynamically masking the input based on the generated tokens to remove unnecessary information**, will be helpful for reasoning tasks beyond the specific algorithmic/arithmetic tasks considered here. For instance, consider mathematical reasoning as raised by a reviewer. When proving a maths result, one typically breaks the argument into multiple steps, aka lemmas, that provide the dynamic flow of the proof. The proof of a new lemma $n$ relies on some of the previous lemmas from $1$ to $n-1$, but ideally the proof of the new lemma can be composed from the previous lemmas without requiring their proofs; otherwise, there exists a more composable proof that does not repeat unnecessary arguments. Thus, we expect an approach that masks the unnecessary information (here, the proofs of the previous lemmas, or unnecessary lemmas which sometimes lead to an intermediate theorem) to indeed be useful for reasoning. In the inductive scratchpad framework, this corresponds to having states that only include the lemmas proven so far, the current objective (the claim that the model is trying to prove), and the steps that the model is taking towards proving the current objective. **This should generally allow the model to handle computationally larger solutions/context sizes and better compose in OOD settings, thus eventually improving the model’s performance, as was seen in the considered tasks.** Implementing this for more general tasks such as maths nonetheless requires major dataset preparation, which is beyond the scope of the current paper.
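As a toy illustration of this dynamic-masking idea (our own sketch, not the paper's implementation; the function name and segment encoding are assumptions), one can build an attention mask over a Q#state1#state2#... layout so that tokens of state $i$ see only the question, the previous state, and the part of state $i$ generated so far:

```python
import numpy as np

def inductive_mask(segment_lengths):
    """Boolean (T, T) visibility mask for a Q#state1#state2#... layout.

    segment_lengths[0] is the question length; each later entry is a state
    length. A token in state i may attend to the question, to state i-1,
    and (causally) to the part of state i generated so far, mimicking the
    Markovian conditioning of an inductive scratchpad.
    """
    T = sum(segment_lengths)
    starts = np.cumsum([0] + list(segment_lengths[:-1]))
    mask = np.zeros((T, T), dtype=bool)
    for i, (s, length) in enumerate(zip(starts, segment_lengths)):
        for t in range(s, s + length):
            mask[t, :segment_lengths[0]] = True   # always see the question Q
            if i >= 1:                            # see the previous state only
                p = starts[i - 1]
                mask[t, p:p + segment_lengths[i - 1]] = True
            mask[t, s:t + 1] = True               # causal within current state
    return mask
```

With segments `[3, 2, 2, 2]`, a token of state3 sees Q and state2 but not state1, so predicting state3 from the full prefix looks like predicting it from Q#state2# alone.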
**III. Formal and informal statements.** We had reviewer requests on both sides. We can write very formal and very informal versions of the results. For instance, Conjecture 1 informally reads as “Regular transformers can efficiently weakly learn a well-behaved distribution if and only if the distribution has constant locality”, while the formal version adds all quantifiers: there exist $c_1,c_2=O_n(1)$ such that in time $n^{c_1}$ the Transformer trained by SGD (with the details of what this means) learns the target function with an accuracy $\geq 1/2 + n^{-c_2}$, etc. Some may have to go to the appendix due to space limitations. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models | Accept (poster) | Summary: The authors address the problem of data augmentation for classification tasks by proposing a novel approach that utilizes separate energy-based models (EBMs) to generate synthetic data for each class. These EBMs are derived directly from the logits of a binary classifier whose goal is to distinguish real data from negative data. These logits (their logsumexp over positive and negative classes) are reinterpreted as an energy function for each class.
In order to avoid training these classifiers on the data itself, the authors take advantage of a pre-trained Prior-Data Fitted Network (PFN).
The authors perform a comprehensive evaluation of their method comparing it to competing methods for data augmentation on a number of small real world datasets. They show that their method is the only one that offers a consistent improvement in classification from the addition of synthetic data.
Strengths: - The paper is well-structured and clearly written, effectively communicating the key ideas and concepts.
- The authors introduce a novel approach by leveraging a Prior-Data Fitted Network (PFN) to obtain energy-based models (EBMs), which, to the best of my knowledge, has not been previously explored.
- The experimental results section is thorough and comprehensive. They compare their approach to a wide range of relevant and competitive methods for synthetic data generation.
- The appendices are well-organized and provide valuable supplementary information. The paper strikes a good balance with the most relevant information presented in main text and additional details provided in the appendix.
Weaknesses: - Interpreting classifier logits as energy functions may not necessarily yield good density models. Given a classifier with logits $f(x)[y]$, one can add an arbitrary function $g(x)$ that is constant over $y$, $f(x)[y] \rightarrow f(x)[y] + g(x)$, without affecting the conditional probabilities $p(y\vert x)$ due to the invariance of the softmax function to the addition of a constant. However, this changes the energy function for the density model to $E(x) \rightarrow E(x) - g(x)$. Consequently, there exists a functional degree of freedom that can arbitrarily alter the energy function without impacting the discriminative loss optimized by a classifier. As a result, a classifier trained solely to distinguish between positive and negative samples is not inherently encouraged to learn a useful energy function for the density over $x$.
My interpretation of the results of [1], from which the authors seem to draw inspiration, is that reinterpreting the logits of a classification model as an energy-based model by itself accomplishes nothing if the training process of the original classifier is not changed to encourage a good density model to be learned and encoded into the free degree of freedom. [1] achieves this by optimizing the joint log-likelihood, $\mathbb{E}_{x,y}[\log p_\theta(x,y)]$, instead of the conditional log-likelihood, $\mathbb{E}_{x,y}[\log p_\theta(y\vert x)]$, used in training classifiers. This significantly complicates training by requiring sampling from the EBM during training. Therefore, it is unclear why the authors assume that the pre-trained PFN has learned useful energy functions for density modeling when it is derived from a simple classifier, and this is not discussed in the paper.
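The degree of freedom described above is easy to verify numerically (a minimal sketch of our own, not from the paper): adding any per-input shift $g(x)$, constant over $y$, to the logits leaves the conditional $p(y\vert x)$ unchanged while shifting the implied energy $E(x) = -\mathrm{logsumexp}_y f(x)[y]$ by exactly $-g(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 2))   # f(x)[y] for 5 inputs, 2 classes
g = rng.normal(size=(5, 1))        # arbitrary g(x), constant over y

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def logsumexp(a):
    m = a.max(axis=1)
    return m + np.log(np.exp(a - m[:, None]).sum(axis=1))

# The classifier's conditional p(y|x) is invariant to the shift...
p_same = np.allclose(softmax(logits), softmax(logits + g))
# ...but the implied energy E(x) = -logsumexp_y f(x)[y] shifts by -g(x).
energy_shift = -logsumexp(logits + g) - (-logsumexp(logits))
```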
- Another weakness of the paper is in the experimental setup where only default hyperparameters are used for all competing methods. For many of these methods, tuning hyperparameters can substantially improve their results and the default hyperparameters may not be optimal.
Personally, I would find the presented results more convincing if the paper compared with fewer competing methods but made a better effort of getting good results out of those as it is impossible to know how much the authors tuned the degrees of freedom available during experimentation with their own proposed method.
[1] Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, 2019 (https://arxiv.org/abs/1912.03263)
Technical Quality: 2
Clarity: 3
Questions for Authors: - Can you provide a convincing explanation for why the logits from discriminative classifiers derived from TabPFN should be useful when reinterpreted as a class-conditional energy function in light of the first point above? Without a clear justification, it is difficult to understand the basis for expecting this method to work effectively.
- Since the authors marginalize over $y$ by taking the LogSumExp over the $y=0$ and $y=1$ logits corresponding to negative and empirical class data, respectively, one would expect the energy function to represent a mixture distribution between these two densities. Why aren't the logits for $y=1$ used directly as an energy function instead? In other words, what is the motivation for marginalizing over $y$ when it is simply an indicator of whether the data is real or negative?
- The details of how the negative examples are generated are not clear. The text simply states:
"These surrogate negative samples $\mathcal{X}^{neg}_c$ are constructed to be far from the distribution of real data, ensuring that the classifier can easily distinguish them. This placement prevents class ambiguity and facilitates a robust energy landscape". The appendix does provide some additional details, but it is still ambiguous how this data is generated. Given the potential importance of this process in understanding why the method works, could you provide more clarity on how the negative data is generated and discuss how changing the distribution of negative data affects the results?
- The authors helpfully point out all datasets that were also used as test sets for TabPFN's evaluation since these could potentially be tainted if good practices were not followed on the original TabPFN paper [1].
I am more concerned however with datasets that were part of the meta-validation set that was used to tune the hyperparameter priors of TabPFN. It seems that one of the datasets used for evaluation of TabEBM (stock) is part of this meta-validation set (Tables 7 and 8 in the appendix of [1]).
Even though this is only one of 150 datasets that comprise this validation set and it is not clear in the original paper the extent of this finetuning, I would still consider it good practice to avoid using this dataset for evaluating downstream methods for fear of data leakage. I also would have liked the paper to prioritize evaluating on more datasets that weren't used at all in [1]. I understand if [1] might have somewhat exhausted the pool of small datasets in OpenML but evaluating on larger datasets by first subsampling them would be an option. This would also allow for the use of real data as an ideal baseline to measure TabEBM's improvement against.
- For TabEBM and the other methods that learn class conditional densities (CTGAN, TabDDPM, ARF, GOGGLE, TabPFGen), what distribution of classes (i.e., $p(y)$) was used for the synthetic data? This is not specified in the paper, so I am assuming it was an empirical estimate of $p(y)$ from the training data; otherwise this would be somewhat unfair when comparing to methods that learn a joint distribution. Furthermore, it would be potentially problematic when evaluating balanced accuracy on a test set if the $p(y)$ of the generated data is not the same across methods (for conditional methods).
- How are the statistical fidelity results computed? The metrics reported seem to be univariate ones, so how are they being used to compare multivariate distributions? Are only marginal distributions being compared? Is it an average over univariate metrics computed for each marginal separately? How is the KS test used on categorical variables? This section of the experiments is sorely lacking in details, and the appendices seem to provide no further information. If it is indeed the case that only marginals are being compared, I find this evaluation lackluster, since it is easy to achieve similar marginal distributions without learning the relationships between different input variables, which is the main difficulty in learning multivariate density models.
[1] Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter. TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second, 2022 (https://arxiv.org/abs/2207.01848)
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors aknowledge that due to the reliance of the method on a PFN it is currently constrained for use in small datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and constructive feedback! We address **all** your questions and comments below, and provide a **rebuttal PDF** in the general rebuttal. Due to space limits, we summarise the new results. We will update our manuscript with additional clarifications and results.
## >Q3a: Clarify the generation of the negative samples
The surrogate classification task is to determine if a sample belongs to class $c$ by comparing $\mathcal{X}_c$ against a set of "negative samples" $\mathcal{X}_c^{\text{neg}}$. We label the true samples $\mathcal{X}_c$ as 1 and the negative samples $\mathcal{X}_c^{\text{neg}}$ as 0, and train TabPFN on this dataset, resulting in a class-specific binary classifier used to define $E_c(x)$ (using Equation 3).
We generate the negative samples at the corners of a hypercube in $R^D$. For each dimension $d$, the coordinates of a negative sample are either $\alpha\sigma_d$ or $-\alpha\sigma_d$, where $\alpha$ is a fixed constant and $\sigma_d$ is the standard deviation of dimension $d$. For example, in $R^3$, a negative sample might have coordinates $[\alpha\sigma_1, \alpha\sigma_2, -\alpha\sigma_3]$. In the paper we use four negative samples with $\alpha = 5$, placing them far from the real data.
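A minimal sketch of the procedure described above (the function name and the exact randomization of corner signs are our own assumptions; the paper fixes four negative samples with $\alpha = 5$):

```python
import numpy as np

def make_negative_samples(X_c, n_neg=4, alpha=5.0, seed=0):
    """Place surrogate negative samples at corners of a scaled hypercube.

    Each coordinate of a negative sample is +/- alpha * sigma_d, where
    sigma_d is the per-dimension standard deviation of the real class data
    X_c, so the negatives sit far outside the data distribution.
    """
    rng = np.random.default_rng(seed)
    sigma = X_c.std(axis=0)
    signs = rng.choice([-1.0, 1.0], size=(n_neg, X_c.shape[1]))
    return signs * alpha * sigma
```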
## >Q3b: How does the distribution of negative data affect the results?
We found it essential to use negative samples far from the real data, to allow the binary classifier to easily differentiate between $\mathcal{X}_c$ and $\mathcal{X}_c^{\text{neg}}$.
Figure `F1` from the PDF shows a **new experiment** varying the distribution of the negative samples. TabEBM infers an accurate energy surface with distant negative samples, and the energy surface becomes inaccurate when negative samples resemble real samples. This occurs because TabPFN is uncertain, affecting its logits magnitude and making them unsuitable for density estimation (we further investigate the logits in our next answer to Q1/W1).
TabEBM is robust to the distribution of negative samples if they are easily distinguishable from real data. We ran **two new experiments**, keeping the negative samples at the corners of a hypercube and varying their distance and number. When varying the per-dimension distance of those samples $\alpha \in [0.1, 5]$, TabEBM provides consistent improvements between 2.85-3.18%. Additionally, our results in Table `R2` (in our answer to Q1 from reviewer Cc3y) demonstrate that TabEBM performs similarly regardless of the number of negative samples in the surrogate binary tasks.
## >Q1/W1: Why can TabPFN's logits be useful for energy estimation?
Indeed, we agree with your observation! As the TabPFN classifier is trained on different class-specific surrogate tasks, it learns different logits for each task. We found it essential to place the negative samples far from the real samples, because TabPFN's confidence depends on the distance to the training data [1], as it was pre-trained to approximate Bayesian inference.
We present **new experiments** in Figure `F2` from the PDF. The top row shows that as the distance to the real data increases, the logit $f(x)[1]$ for the real data smoothly decreases until the two logits become similar. Thus, the classifier is uncertain in these far-away regions. As the maximum logits decrease, TabEBM's inferred density drops significantly (as shown on the bottom row) because $p_c(x) \propto (\exp(f(x)[0]) + \exp(f(x)[1]))$. One could possibly fine-tune TabPFN on the surrogate tasks to further improve the logits for density estimation.
## >Q2: Why fit p(x) rather than p(x|y=1) in the surrogate binary tasks?
Both approaches lead to virtually identical results due to the design of our surrogate task. The **new results** in Figure `F2` (bottom row) from the PDF show that TabEBM's inferred energy using $p(x)$ (in red) is essentially identical to the energy of $p(x|y=1)$ (in grey), especially near the real data. As the SGLD sampling starts near the original points (line 676), it will visit the neighbourhood of the real samples, where the energy surfaces are virtually identical, leading to similar results.
## >Q4: Evaluating more datasets
We run **new experiments** on six leakage-free datasets from UCI, with 1,000-9,105 samples and 7-42 features. Table `R2` (in our response to Q2 for `tdoQ`) shows that TabEBM consistently outperforms the baseline and all other benchmark methods.
## >Q5: What's the class distribution of the synthetic data?
The synthetic data has the same class distribution as the real training data.
## >Q6: Is statistical fidelity computed over the univariate marginals?
Yes, these metrics are univariate, and we computed them using the open-source library Synthcity. We acknowledge the reviewer's point about the imperfection of univariate metrics, although evaluating generators' ability to capture the joint feature relationships remains underexplored `[2]`.
## >W2: How does tuning the generators/predictors affect the ranking?
We run **new experiments** tuning three generators on three datasets using the average validation accuracy of six downstream classifiers. The ranges are: SMOTE ($k \in \{3, 5, 10\}$), TVAE ($\text{lr} \in \{5e-4, 1e-3, 5e-3\}$) and CTGAN ($\text{lr} \in \{1e-3, 5e-3, 1e-2\}$). TabEBM remains the most competitive data augmentation method, improving accuracy by +2.25%, followed by SMOTE (+1.81%) and CTGAN (-5.71%). We can provide the full results in the discussion.
We also tuned the downstream predictors, and the **new results** in Table `R7` from the PDF show that TabEBM remains the best-performing method, providing the largest improvements for data augmentation, even after tuning the predictors.
Thank you again for your constructive review! We would appreciate it if you would consider raising your score in light of our response.
References:
- `[1]` McCarter, *What exactly has TabPFN learned to do?*, ICLR Blogposts, 2024
- `[2]` Tu, Ruibo, et al., 2024, (https://arxiv.org/abs/2406.08311)
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for the clarifications and for taking the time to run these additional experiments.
I must confess that the reason this approach works is still somewhat mysterious to me and I would like to have a better theoretical justification for it.
However, the linked blogpost and the author's additional experiments do shed some light on the matter and seem to suggest that this may be due to some property of TabPFN.
The authors have, at least, successfully convinced me that their surrogate task procedure works and is somewhat reliable and I believe these are valuable empirical results.
I believe the clarification on the generation of negative data should be in the paper.
Since I consider my concerns mostly addressed, I am happy to raise my score to a 6. | Summary: The paper proposes TabEBM, a novel data augmentation method designed for low-sample-size tabular classification tasks. TabEBM generates synthetic tabular data using class-specific Energy-Based Models (EBMs) to learn the marginal distribution for each class. Experimental results on various real-world datasets demonstrate that TabEBM improves downstream performance via data augmentation, generates high-fidelity synthetic data, and strikes a competitive balance between accuracy and privacy in data sharing.
Strengths: - Novel approach using class-specific EBMs for tabular data generation.
- Comprehensive evaluation across multiple datasets, metrics, and downstream tasks.
- Strong performance, especially in low data regimes.
- Thorough analysis of statistical fidelity and privacy preservation.
- Open-source implementation provided.
Weaknesses: - Scalability Issues: The reliance on TabPFN, which struggles with large sample sizes, limits TabEBM's scalability.
- Implementation Complexity: The need for class-specific surrogate tasks and the iterative nature of the sampling process may complicate implementation and increase computational overhead.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does TabEBM perform on datasets with hundreds of features?
- Could the approach be extended to use other pre-trained models besides TabPFN?
- How does the method handle highly imbalanced datasets where some classes have significantly fewer samples than others?
- What is the computational complexity of generating samples compared to other methods?
- How sensitive is the performance to the choice of hyperparameters in the SGLD sampling process?
- Clarify the meaning of the word "fit" which is used in line 122 of the paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Performance may degrade for datasets much larger than those tested.
- Inherits limitations of the underlying TabPFN model.
- May not be suitable for datasets with extremely high dimensionality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review! We address **all** your questions and comments below. Due to limited space, we summarised the new results. We will update the manuscript to include the complete new experiments and clarifications.
## >Q3: How does TabEBM perform on highly imbalanced datasets?
We conducted **new experiments**, adjusting the class imbalance on two binary datasets (with $N_\text{real}=100$). We keep the setup from Section 3 and report the balanced accuracy averaged over six downstream predictors. Table `R4` shows that under high class imbalance, TabEBM outperforms both the Baseline and SMOTE, a method specifically designed to handle imbalanced datasets.
*Table `R4`.* Test balanced classification accuracy (%) varying the class imbalance.
|**Datasets**|`(#Minority:#Majority)`|Baseline|SMOTE|TabEBM(Ours)|
|---|---|---|---|---|
|**steel**|`(50:50)`|88.79|77.74|**93.77**|
| |`(10:90)`|60.74|60.76|**73.02**|
| |`(5:95)`|52.29|56.11|**65.95**|
|**stock**|`(50:50)`|88.64|**90.21**|90.16|
| |`(10:90)`|76.05|83.86|**84.92**|
| |`(5:95)`|65.98|75.24|**77.68**|
## >Q5: How sensitive is TabEBM to the hyperparameters of the SGLD sampling?
We run **new experiments** varying two key hyperparameters of SGLD on the “biodeg” dataset with $N_{\text{real}}=100$. Specifically, we vary the step size $\alpha_{\text{step}}$ and the noise scale $\alpha_{\text{noise}}$, reporting the accuracy averaged over six downstream predictors.
Table `R5` shows that TabEBM is robust to these hyperparameters, as the difference between the highest and lowest accuracy is less than 1.5%. Note that increasing $\alpha_{\text{noise}}$ (which is added at each SGLD step) is expected to degrade performance because we standardised the data to have unit standard deviation.
*Table `R5`.* Classification accuracy (%) of TabEBM with different SGLD settings
|$\alpha_{\text{step}}\rightarrow$|0.1|0.3|0.5|1.0|
|---|---|---|---|---|
|$\alpha_{\text{noise}}\downarrow$| | | | |
|0.01|76.45|77.09|77.04|76.58|
|0.02|76.86|76.96|76.77|76.26|
|0.05|75.93|75.89|75.94|75.70|
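For intuition on what the two hyperparameters control, here is a minimal, generic SGLD sketch (the function name and the toy quadratic energy are illustrative, not from the paper): $\alpha_{\text{step}}$ scales the gradient step on the energy surface, while $\alpha_{\text{noise}}$ scales the Gaussian noise injected at every step.

```python
import numpy as np

def sgld_sample(grad_energy, x_init, n_steps=200, alpha_step=0.1, alpha_noise=0.01, seed=0):
    """Generic SGLD: walk samples downhill on the energy surface while
    injecting Gaussian noise at every step."""
    rng = np.random.default_rng(seed)
    x = x_init.copy()
    for _ in range(n_steps):
        x = x - alpha_step * grad_energy(x) + alpha_noise * rng.standard_normal(x.shape)
    return x

# Toy quadratic energy E(x) = 0.5 * ||x - mu||^2 centred at mu = (1, 2);
# its gradient is (x - mu), so samples should concentrate near mu.
mu = np.array([1.0, 2.0])
samples = sgld_sample(lambda x: x - mu, x_init=np.zeros((500, 2)))
print(np.round(samples.mean(axis=0), 1))  # → [1. 2.]
```

In TabEBM, the gradient would come from the learned class-specific energy $E_c(x)$ rather than this toy quadratic.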
## >Q6: Clarify the meaning of the word "fit" on line 122
We use the term “fit” to mean “train” the in-context model TabPFN. For TabPFN, “training” is analogous to the K Nearest Neighbours algorithm: “training” simply means defining the dataset used at inference time. We therefore only update TabPFN's training dataset and run the model in inference mode, never updating its parameters.
## >Q4: How does TabEBM's computational complexity compare to other methods?
Figure 4 from the paper illustrates the trade-off between accuracy and the time required for training and generating 500 synthetic samples. The results show that TabEBM is the fastest method for generating data besides SMOTE. Other methods are 3-30 times slower than TabEBM. Also, TabEBM surpasses all other methods in improving downstream accuracy through data augmentation.
## >W2: Class-specific surrogate tasks and iterative sampling can complicate implementation and increase computational overhead
**On implementation complexity**: TabEBM simplifies implementation by eliminating the need for dedicated training, pipelines, protocols, and resources—making it ready to use for generating data. We provide an open-source implementation of TabEBM, allowing users to generate data with just two lines of code for new datasets. We believe this ease of use will encourage the widespread application of tabular data augmentation on small datasets.
**On the computational overhead**: As mentioned in our answer to Q4 and Figure 4, TabEBM is computationally efficient, being the fastest method for generating data besides SMOTE, while outperforming all competing methods.
## >Q2: Can TabEBM use other pre-trained models besides TabPFN?
Yes, TabEBM can be used with other pre-trained models besides TabPFN. In the paper, we use TabPFN because it is the only tabular model of this type with open-sourced weights. However, TabEBM is a general method for transforming classifiers into class-specific generators and can use any gradient-based in-context classifier that computes logits (through Equation 3). As new tabular foundational models are developed `[1]`, they can be readily integrated into TabEBM.
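As a hedged illustration of the classifier-to-EBM conversion mentioned here: a common way to derive an energy from logits (in the style of JEM) is $E(x) = -\log\sum_y \exp f(x)[y]$. Whether this matches Equation 3 exactly is an assumption, since the rebuttal only states that the energy is derived from the classifier's logits.

```python
import numpy as np

def energy_from_logits(logits):
    """Assumed JEM-style energy: E(x) = -logsumexp over the class logits.
    Computed with the max-shift trick for numerical stability."""
    m = logits.max(axis=-1, keepdims=True)
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))

# Two points: one the classifier is confident about, one it is not.
e = energy_from_logits(np.array([[2.0, 0.0], [0.0, 0.0]]))
print(np.round(e, 3))  # → [-2.127 -0.693]
```

Lower energy corresponds to higher unnormalised density, so confidently classified points are assigned higher density under the induced EBM.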
## >Q1/W1/L1/L2/L3: On the scalability and limitations of TabEBM
TabEBM builds on an underlying binary classifier, so its capabilities and limitations are directly influenced by the in-context model it employs (TabPFN in our case), as discussed in the paper (Lines 317-323). Since TabPFN handles only up to 100 features, TabEBM inherits this limitation.
Our **new experiment** (Table `R6`) shows that TabEBM can be applied to larger datasets beyond TabPFN's limitation of 1000 samples. On larger datasets TabEBM still outperforms other generators (not shown due to space limits), but training on real data alone appears sufficient. This highlights TabEBM's usefulness in fields with limited training samples. Note that despite TabPFN's 10-class limit, TabEBM can handle unlimited classes by using TabPFN only for surrogate binary tasks.
As foundational models for tabular data evolve `[1]`, new models that can accommodate more features are anticipated. Integrating these models into the TabEBM framework will enable it to handle high-dimensional datasets, thus increasing its versatility and utility.
*Table `R6`.* Classification accuracy (%) comparing data augmentation with increased real data availability on the “texture” dataset
|$N_{\text{real}}$|Baseline|Improvement by TabEBM (%)|
|---|---|---|
|50|72.40|+6.50|
|100|82.42|+3.59|
|1000|96.37|-0.07|
|2000|97.76|+0.07|
|3000|98.20|+0.15|
|4000|98.51|+0.04|
Thank you again for your thoughtful review! We would appreciate it if you would consider raising your score in light of our response.
**References:**
- `[1]` Boris van Breugel, Mihaela van der Schaar, *Why Tabular Foundation Models Should Be a Research Priority*, ICML 2024 (https://arxiv.org/abs/2405.01147)
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for clarifying and conducting additional experiments. These efforts have significantly enhanced the quality of the research. However, I believe my initial evaluation remains valid, so my score remains unchanged.
---
Rebuttal 2:
Title: Additional clarifications and results on imbalanced datasets and large datasets
Comment: Dear Reviewer iojh,
Thank you for acknowledging the additional experiments and clarifications we provided. We are glad that you found that our efforts have significantly enhanced the quality of our research. We are also pleased that, in your original review, you highlighted several strengths of our work, including its novelty, comprehensive evaluation, strong performance, and open-source implementation.
We conducted **additional experiments** to further support our argument, which we believe will interest the reviewer. Specifically, we evaluated four additional generative models—TVAE, CTGAN, TabDDPM, and TabPFGen—addressing your questions on handling imbalanced datasets and the effectiveness of data augmentation on large datasets.
- Table `R5-extended` demonstrates that TabEBM outperforms these methods, especially on highly imbalanced datasets. The results indicate that TabEBM is particularly effective for class balancing, and we will highlight this application in the revised manuscript.
- Table `R6-extended` suggests that data augmentation may not be necessary when ample real data is available for model training. As the availability of real data increases, training downstream predictors solely on this real data can guarantee optimal performance. Existing methods, such as TVAE, CTGAN and TabDDPM, tend to decrease performance on smaller datasets, where performance enhancement is most needed, while TabEBM consistently delivers the largest improvements. This suggests that data augmentation is beneficial primarily in data-scarce scenarios, where TabEBM excels. We will include these results in the updated manuscript.
Given the new results and your positive feedback, we kindly ask if you might reconsider your initial score in light of these results. If there are any remaining concerns, please let us know, as we would be more than happy to discuss them further while the discussion period is open. We would greatly appreciate your feedback in order to further improve our paper.
Thank you again for your time and consideration,
Authors
Table `R5-extended`. Test balanced classification accuracy (%) varying the class imbalance. We aggregated the performance over six downstream predictors. TabEBM is effective for datasets with high class imbalance.
| Datasets `(#Minority:#Majority)` | Baseline | SMOTE | TVAE | CTGAN | TabDDPM | TabPFGen | TabEBM (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **steel** | | | | | | | |
| `(50:50)` | 88.79 | 77.74 | 74.21 | 77.61 | 83.78 | 89.67 | **93.77** |
| `(20:80)` | 74.33 | 66.56 | 68.10 | 61.52 | 69.06 | 83.38 | **84.65** |
| `(10:90)` | 60.74 | 60.76 | 54.85 | 53.50 | 58.46 | 70.36 | **73.02** |
| `(5:95)` | 52.29 | 56.11 | 51.12 | 51.30 | 51.17 | 64.84 | **65.95** |
| **stock** | | | | | | | |
| `(50:50)` | 88.64 | **90.21** | 88.67 | 85.52 | 89.23 | 88.16 | 90.16 |
| `(20:80)` | 85.34 | 88.95 | 85.87 | 85.37 | 87.16 | 88.72 | **89.38** |
| `(10:90)` | 76.05 | 83.86 | 74.60 | 76.93 | 78.81 | 83.52 | **84.92** |
| `(5:95)` | 65.98 | 75.24 | 62.66 | 61.11 | 68.69 | 75.56 | **77.68** |
Table `R6-extended`. Test balanced classification accuracy (%) aggregated over six downstream predictors, comparing data augmentation with increased real data availability of the “texture” dataset. TabPFGen is not applicable because it supports only datasets with up to 10 classes.
| N_real | Baseline | SMOTE | TVAE | CTGAN | TabDDPM | TabPFGen | TabEBM (Ours) | Improvement vs Baseline by TabEBM (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 50 | 72.40 | 76.40 | 55.33 | 54.80 | 62.94 | N/A | **78.90** | +6.50 |
| 100 | 82.42 | 84.35 | 66.00 | 69.49 | 76.34 | N/A | **86.01** | +3.59 |
| 200 | 87.54 | 89.29 | 78.37 | 82.44 | 82.53 | N/A | **89.77** | +2.23 |
| 500 | 92.96 | 93.69 | 90.09 | 91.48 | 91.24 | N/A | **93.76** | +0.80 |
| 1000 | **96.37** | 96.21 | 93.61 | 95.36 | 94.56 | N/A | 96.30 | -0.07 |
| 2000 | 97.76 | 96.84 | 96.62 | 97.10 | 97.13 | N/A | **97.83** | +0.07 |
| 3000 | 98.20 | 98.28 | 97.60 | 97.60 | 97.73 | N/A | **98.35** | +0.15 |
| 4000 | 98.51 | **98.59** | 98.11 | 98.00 | 98.46 | N/A | 98.55 | +0.04 | | Summary: The authors present a new method of tabular data augmentation, called TabEBM. The unique feature of TabEBM is that it creates distinct generative models for each class in a classification problem setting. With extensive and thorough evaluations, the authors prove that TabEBM sets the new state of the art.
Strengths: 1. The manuscript is well organized. The text is clear. The quality of the figures is high.
2. The authors address a very common problem for the broad scientific community. Application of machine learning algorithms on small tabular datasets is limited by design. Introducing TabEBM as an effective data augmentation method, this work can have an extremely broad impact in the future.
3. The experiments are rich and rigorous. The method is compared against many existing methods on a variety of datasets. The evidence proving its effectiveness is convincing.
4. The authors claim TabEBM will be available as an open-source library.
Weaknesses: Some important details on study design, model training, and practical considerations of TabEBM application are missing. See questions.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. The definition of a surrogate binary classification task (lines 111-113) might result in class imbalance. How is this potential problem addressed?
2. Can the authors provide more details on training the models for the surrogate binary classification task? This could eliminate some other questions.
3. Have the authors specifically investigated the propensity of TabEBM to generate outliers? How does TabEBM compare to other methods in this regard?
4. In section 3.2, the authors describe many statistical evaluations performed. However, the multiple hypothesis testing aspect is not described in sufficient detail. How were the p-values adjusted? How strong is the impact of a particular correction method on the final results (tables of C3)?
5. Can the authors share more insights about the open-source library? This is positioned as one of the key contributions, but the code is not provided and the practical considerations of using the library are not clear from the submission.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Briefly discussed in section 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review! We address **all** of your questions and comments below. Due to space constraints, we summarised the new results. We will update the manuscript to include the complete new experiments and clarifications.
## >Q2: Provide details on training models for the proposed surrogate binary classification tasks
In TabEBM, for each class $c$, we train a class-specific EBM, $E_c(x)$, using exclusively the data from that class, denoted as $\mathcal{X}_c$. The energy function $E_c(x)$ is derived by adapting the logits from a binary classifier, as outlined in Equation 3. To train this classifier with only class-specific data, we create surrogate binary classification tasks.
Our surrogate classification tasks determine if a sample belongs to class $c$ by comparing $\mathcal{X}_c$ against a set of "negative samples," $\mathcal{X}_c^{\text{neg}}$. We intentionally create these negative samples at a distance of five standard deviations from the original data to ensure the binary classifier can easily differentiate between the real class data and these artificially distant samples.
We label the true class samples $\mathcal{X}_c$ as 1 and the negative samples $\mathcal{X}_c^{\text{neg}}$ as 0, forming a combined dataset $\mathcal{D}_c = (\mathcal{X}_c \cup \mathcal{X}_c^{\text{neg}}, \{1\}^{|\mathcal{X}_c|} \cup \{0\}^{|\mathcal{X}_c^{\text{neg}}|})$. Following this, we train the TabPFN classifier on $\mathcal{D}_c$, resulting in a binary classifier specific to class $c$. This classifier is then used to define the energy function $E_c(x)$ using Equation 3, from which we can generate data using SGLD.
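A minimal sketch of the surrogate-task construction described above. The exact placement of negatives is our illustrative choice (a shell of random unit directions scaled to the target distance); the description above only specifies that negatives sit roughly five standard deviations from the (standardised) class data.

```python
import numpy as np

def make_surrogate_task(X_c, num_neg=None, offset_std=5.0, seed=0):
    """Build the binary task for one class c: real rows of X_c are labelled 1,
    synthetic 'negative' points placed far from the data are labelled 0."""
    rng = np.random.default_rng(seed)
    n_neg = num_neg if num_neg is not None else len(X_c)
    d = X_c.shape[1]
    # Illustrative placement: random directions scaled to offset_std around the
    # class mean (data are assumed standardised to unit variance beforehand).
    directions = rng.standard_normal((n_neg, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    X_neg = X_c.mean(axis=0) + offset_std * directions
    X = np.vstack([X_c, X_neg])
    y = np.concatenate([np.ones(len(X_c)), np.zeros(n_neg)])
    return X, y

X_c = np.random.default_rng(1).standard_normal((20, 3))
X, y = make_surrogate_task(X_c, num_neg=4)
print(X.shape, int(y.sum()))  # → (24, 3) 20
```

The resulting $(X, y)$ pair is what the in-context classifier is "fit" on to yield the class-specific energy via Equation 3.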
## >Q1: Does the surrogate binary classification task result in class imbalance issues?
No, the imbalance between the "negative samples" and the real samples used in the surrogate binary classification task has a negligible impact on TabEBM's performance.
We conducted a new experiment to evaluate the effects of varying the ratio $\|\mathcal{X}^{\text{neg}}_c\|:\|\mathcal{X}_c\|$. To ensure reliable outcomes, we maintained the same proportion of negative samples across all classes. We varied the number of negative samples $\|\mathcal{X}^{\text{neg}}_c\|$ to represent various fractions of $\|\mathcal{X}_c\|$, thus simulating both balanced and highly imbalanced scenarios.
The results, presented in Table `R3`, computed across six datasets with $N_{\text{real}} = 100$ real samples, demonstrate that TabEBM is robust to potential imbalances in the surrogate binary tasks.
*Table `R3`.* Classification accuracy (%) on data augmentation, showing the impact of the number of negative samples $\|\mathcal{X}^{\text{neg}}_c\|$ in TabEBM across six datasets. TabEBM's performance is robust regarding the number of negative samples.
|Ratio $\|\mathcal{X}^{\text{neg}}_c\|:\|\mathcal{X}_c\|$|`1`|`0.5`|`0.2`|`0.1`|Fixed $\|\mathcal{X}^{\text{neg}}_c\|=4$|
|---|---|---|---|---|---|
|Average accuracy improvement|+2.91|+2.87|+2.88|+2.92|+2.90|
## >Q3: TabEBM's propensity to generate outliers
We provide fair and thorough comparisons by (i) including a wide range of evaluation metrics and (ii) utilising a large synthetic set (Lines 183-184). Given our extensive test scope (i.e., eight datasets $\times$ five sample sizes), it is computationally infeasible to employ complex, specialised methods to detect outliers in synthetic data. However, the downstream accuracy and statistical fidelity metrics are highly indicative of distribution shifts between real and synthetic data. Notably, Figure 3 illustrates that TabEBM consistently outperforms the baseline across various sample sizes and class numbers. We believe these two metrics adequately demonstrate that TabEBM is more stable in generating in-distribution samples than the benchmark methods.
## >Q4: Details on the multiple hypothesis testing in Section 3.2
We aim to provide a fair and coherent comparison between TabEBM and existing methods, and thus, we follow the widely-adopted evaluation process in prior studies. Specifically, we compute the statistical fidelity metrics with the open-source implementations from the well-established open-source library Synthcity. However, the previous studies often operate under the assumption that the issues associated with multiple comparisons are less pronounced in generating low-dimensional tabular data, hence correction methods for multiple hypothesis testing are seldom employed. We will further clarify this in the revised manuscript.
## >Q5: The code for the open-source library is not provided
We included the TabEBM library as a zip file in the Supplementary material. The library has two core functionalities:
1. *Generate synthetic data*: The library can generate data that can be used as additional training material for data augmentation. We have included a demo notebook, `TabEBM_generate_data.ipynb`, in the attached codebase. This notebook demonstrates how to use the generated data for data augmentation and shows a toy example comparing the quality and position of the synthetic data with the real data.
```python
from TabEBM import TabEBM
tabebm = TabEBM()
augmented_data = tabebm.generate(X_train, y_train, num_samples=100)
# augmented_data[class_id] = generated data for a specific `class_id`
```
2. *Compute & visualise the energy function*: We allow users to compute TabEBM's energy function and the unnormalised data density. The demo notebook, `TabEBM_approximated_density.ipynb`, shows TabEBM's energy under conditions of data noise and class imbalance.
We will make the TabEBM library freely available after publication. It is easy to use, domain-agnostic, and requires no training, making it suitable for data augmentation, especially for small-sized datasets.
Thank you again for your thoughtful review! We would appreciate it if you would consider raising your score in light of our response.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the clarifications and additional experiments!
I believe my initial evaluation was fair, so my score remains the same. | Summary: The paper introduces TabEBM, a class-conditional generative method for tabular data augmentation using distinct EBMs for each class. By modeling each class's data distribution individually, TabEBM generates high-quality synthetic data. Experiments demonstrate that using TabEBM for data augmentation improves classification performance across various datasets, particularly small ones.
Strengths: The paper introduces distinct class-specific EBMs for tabular data augmentation. Moreover, it presents thorough experiments demonstrating effectiveness across various datasets, especially small ones. Additionally, the writing is well-structured and clear.
Weaknesses: 1. On the technical level, TabPFN, EBMs, and SGLD have all been introduced in TabPFGen [48]. The main difference in the proposed TabEBM method is the use of class-specific EBMs. However, this idea is straightforward and lacks novelty.
2. The most significant difference between tabular tasks and common tasks like images is that they include both continuous and categorical features. However, TabEBM treats all features as continuous, lacking targeted consideration for categorical features.
3. In TabEBM, although the choice of surrogate binary classifier is arbitrary, not using training-free models like TabPFN will result in training costs being positively correlated with the number of classes, significantly increasing the training overhead.
4. The datasets used in the experiments have a maximum dimensionality of 77, lacking experimental results on high-dimensional data (e.g., hundreds of dimensions).
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Do the datasets used in the experiments only contain continuous features and no categorical features?
2. The TabEBM method is similar to TabPFGen [48]. Why didn't you use the 18 datasets mentioned in the TabPFGen paper for experiments, but instead chose 6 different datasets (with a smaller test scope) for evaluation?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review! We address **all** of your questions and comments below. Due to space constraints, we summarised the new results. We will update the manuscript to include the complete new experiments and clarifications.
## >Q1/W2: Do the datasets contain categorical features? How does TabEBM handle them?
Yes, we evaluated TabEBM on datasets containing both continuous and categorical features. Specifically, Table `R1` shows the two considered datasets with mixed feature types. In lines 639-640, we also clarify that TabEBM encodes the categorical features with leave-one-out target statistics. As the reviewer mentioned, the evaluation results (Table 1) show that TabEBM is effective across various datasets, including those containing categorical features.
*Table `R1`.* Number of categorical features in the datasets considered.
|Dataset|#Categorical|
|---|---|
|**Main text**||
|protein|4|
|energy|1|
|**New datasets in our response to Q2**||
|support2|7|
|mushroom|22|
|abalone|1|
|statlog|13|
## >Q2: Why not evaluate the 18 datasets in the TabPFGen paper?
**Firstly,** we wanted to avoid data leakage and ensure a robust comparison by also evaluating datasets not in TabPFN's meta-test set (Appendix F.3 of its original paper). As the 18 datasets in the TabPFGen paper were all used for constructing and evaluating TabPFN, we use different, leakage-free datasets. **Secondly,** our test scope is not smaller than that of the TabPFGen paper. We evaluate a wide range of settings, considering different datasets and varying the availability of real data. We also include datasets with categorical features, unlike TabPFGen, which investigates only numerical features. Our setup (i.e., eight datasets × five sample sizes) yields up to 33 different test cases. For reference, ARF has the most test cases among prior studies, with 20, while TabPFGen follows with 18. **Thirdly,** we further provide **new results** on six more leakage-free datasets from UCI with 7-42 features, some including categorical features. We set $N_\text{real}=100$ and provide results for the Top-5 benchmark methods. Table `R2` shows that TabEBM continues to outperform all other benchmark methods.
*Table `R2`.* Classification accuracy (%) comparing data augmentation on six *new* leakage-free datasets.
|Datasets|Baseline|SMOTE|TVAE|CTGAN|TabDDPM|TabPFGen|TabEBM (Ours)|
|---|---|---|---|---|---|---|---|
|clinical|68.63|71.07|61.80|65.21|54.03|69.66|**71.20**|
|support2|64.23|65.60|60.70|59.14|58.31|64.34|**65.28**|
|mushroom|95.51|95.84|93.75|93.26|79.87|**97.05**|96.82|
|auction|51.90|57.35|53.09|52.35|51.14|56.82|**57.97**|
|abalone|11.59|N/A|8.49|7.72|9.95|N/A|**13.56**|
|statlog|56.22|57.30|53.12|55.55|53.07|57.65|**57.85**|
## >W1: The core ideas “have all been introduced in TabPFGen” … "this idea is straightforward and lacks novelty”
TabEBM makes a novel contribution to the field of tabular data augmentation by being the first to introduce class-specific generators, as discussed in our related work and as acknowledged by all other reviewers.
Specifically, TabEBM is the first method to create distinct class-specific EBMs: individual models, one per class, each designed to learn the marginal distribution of the inputs associated with that class. TabEBM's *class-specific generation* is not available to TabPFGen; it is only possible due to our proposed *surrogate binary tasks*, which enable creating a binary classification task for each class and obtaining the class-specific EBM via Equation 3. The results demonstrate that training class-specific EBMs significantly improves the inferred energy over TabPFGen under high levels of noise (Figure 7) and high data imbalance (Figure 8). Unlike TabPFGen, which supports only 10 classes, TabEBM can handle an unlimited number of classes. As you rightly point out, our extensive experiments demonstrate that our method, with its class-specific generation, outperforms all other methods in accuracy, statistical fidelity, and privacy.
In addition to the method, our paper makes two additional key contributions. First, we conduct the first extensive analysis of tabular data augmentation across various dataset sizes. Our benchmark reveals that existing tabular augmentation methods may improve performance only on small datasets, a previously uninvestigated issue. Second, we will release our method as an open-source library for tabular data augmentation, allowing users to generate data immediately without additional training.
## >W3: “In TabEBM … not using training-free models like TabPFN will result in … significantly increasing the training overhead.”
Because we tackle classification problems on *small-size* datasets, training models from scratch is not practical, often leading to suboptimal performance due to overfitting. Instead, using pre-trained models, such as TabPFN, is integral to our approach. From a practical standpoint, using pre-trained models makes TabEBM immediately usable. A significant limitation of existing tabular generators is their need for training, which can be time-consuming, error-prone, and typically requires GPUs. In contrast, our TabEBM library (included with the submission) allows users to start generating data immediately without additional training or tuning.
## >W4: Lacking experimental results on high-dimensional data
As noted in Lines 317-323, scaling up in-context tabular models for high-dimensional data remains an unresolved challenge and is not the primary focus of our study. Instead, TabEBM is designed to demonstrate the effectiveness of using pretrained in-context tabular classifiers for generating tabular data. As acknowledged by the reviewer, and further shown in our response to Q2, our "thorough experiments" demonstrate TabEBM's effectiveness.
Thank you again for your thoughtful review! We would appreciate it if you would consider raising your score in light of our response.
---
Rebuttal 2:
Comment: Dear Reviewer tdoQ,
Thank you once more for taking the time to provide your feedback on our work. In light of our response and the new experiments, and with the discussion window closing soon, please let us know if you have any further questions or if there is anything else we can clarify. If not, we would appreciate it if you would consider updating your review based on our rebuttal.
Thank you,
Authors
---
Rebuttal Comment 2.1:
Comment: I appreciate the additional experiments conducted by the author. These evaluations addressed my concerns regarding the experiment scale and performance for datasets with many categorical features. I have increased my overall rating for this paper from Reject to Borderline Accept. | Rebuttal 1:
Rebuttal: We thank the reviewers for the feedback!
## **(1) Summary of positive things**
- **Novel method**
- `Cc3y`: *“a new method of tabular data augmentation”; “the unique feature is that it creates distinct generative models for each class”*
- `iojh`: *“Novel approach using class-specific EBMs for tabular data generation”*
- `XdB4`: *“a novel approach that utilizes separate energy-based models (EBMs) to generate synthetic data for each class”*
- **Comprehensive evaluation showing TabEBM's effectiveness**
- `tdoQ`: *“thorough experiments demonstrating effectiveness”*
- `Cc3y`: *“extensive and thorough evaluations”; ”TabEBM sets the new state of the art”*
- `iojh`: *”Comprehensive evaluation across multiple datasets, metrics, and downstream tasks”; “TabEBM improves downstream performance … generates high-fidelity synthetic data”*
- `XdB4`: *“The experimental results section is thorough and comprehensive”*
- **Clear presentation**
- `tdoQ`: *“well-structured and clear”*
- `Cc3y`: *“The manuscript is well organized”; “The text is clear”; “The quality of the figures is high”*
- `XdB4`: *“The paper is well-structured and clearly written, effectively communicating the key ideas and concepts.”*
- **Impactful method**
- `Cc3y`: *“The authors address a very common problem for the broad scientific community”; “this work can have an extremely broad impact in the future”*
- **Reproducibility**
- `iojh`: *“Open-source implementation provided”*
## **(2) Summary of our responses and new experiments**
We replied to **all** questions and concerns raised by the reviewers:
*Table R0*. Number of our responses to the reviewers’ comments (`#Raised`/`#Replied`).
| Reviewer | # Questions | # Weaknesses | # Limitations | # New experiments |
| --- | --- | --- | --- | --- |
| `tdoQ` | 2/2 | 4/4 | N/A | 1 |
| `Cc3y` | 5/5 | N/A | N/A | 1 |
| `iojh` | 6/6 | 2/2 | 3/3 | 3 |
| `XdB4` | 6/6 | 2/2 | N/A | 5 |
We provide **10 new experiments** and attach a **rebuttal PDF**. The new results are *consistent* with the main text and further support TabEBM’s effectiveness. We will add them to the revised manuscript. Below, we detail the new experiments for each reviewer.
1. `[tdoQ, XdB4]` Evaluation on six new, leakage-free datasets with mixed feature types.
2. `[Cc3y]` Ablation studies on the surrogate binary classification task w.r.t. the imbalance between the negative samples and the real samples.
3. `[iojh]` Evaluation on highly imbalanced datasets to show TabEBM’s robustness.
4. `[iojh]` Ablation studies on TabEBM’s sensitivity to the hyperparameters of the SGLD sampling process.
5. `[iojh]` Evaluation on larger datasets to show TabEBM’s scalability.
6. `[XdB4]` Ablation studies on the distance of the negative samples and the real samples.
7. `[XdB4]` Evaluating the impact of the distribution of the negative samples.
8. `[XdB4]` Investigation into TabPFN's logit values on the surrogate binary tasks.
9. `[XdB4]` Ablation studies on tuning the hyper-parameters of generative benchmarks.
10. `[XdB4]` Ablation studies on tuning the downstream predictors.
Pdf: /pdf/336ab4b7ec24f322205e075c817a1e1e732be2dd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
gRNAde: Geometric Deep Learning for 3D RNA inverse design | Reject | Summary: gRNAde is a graph neural network designed to address the RNA reverse folding problem, a significant challenge due to the potential of RNA as therapeutic modalities and their unique data properties. RNA molecules have lower thermodynamic stability compared to proteins, resulting in fewer training samples, and their increased flexibility means multiple final states are possible. gRNAde addresses these issues by proposing a custom multi-graph representation and extending message passing to operate independently on each conformer while sharing an adjacency graph. The authors thoroughly explore evaluation techniques, comparing their model performance against Rosetta by assessing the percentage of native sequence recovery on held-out sequence families. Additionally, they demonstrate slightly improved performance when utilizing multiple conformers for model training. The authors also conduct an interesting zero-shot ranking analysis on mutation data providing a refreshing evaluation against random baselines.
Strengths: - The paper is exceptionally well written, presenting a thorough overview of the challenges within the modality and the broader field, and the importance of the problem.
- The experimental validation assesses the utility of design choices and although the improved performance in the presence of multiple conformers isn't large (the authors don't report statistical significance), the approach is promising.
- The authors conduct a variant effect evaluation assessing whether their model is capable of learning the impact of single or double mutant sequences, demonstrating convincing improvement over random baselines.
Weaknesses: - I find the arguments regarding gRNAde perplexity being correlated with recovery to have limited support in the current presentation. In Figure 2(b), color denotes perplexity instead of one of the axes, making it very challenging to assess the correlation. In addition, the authors don't report a correlation value or its significance.
- The authors only use random baselines for the retrospective variant effect analysis. Including another reverse folding model or a metric from Rosetta similar to gRNAde's perplexity could strengthen the evaluation.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Can the authors conduct a study assessing the impact of the quantity of training data on gRNAde? As structural data increases, it will be useful to understand how gRNAde's performance scales with data abundance.
- What are the other possible bottlenecks currently inhibiting further performance improvements of multi-conformational models? Do the authors believe further advancements in architectural designs are required to demonstrate larger improvements between models capturing a single vs multi-conformer information?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: - The authors effectively discuss the current evaluation limitations and the difficulty of assessing the novelty and ground truth recovery of generated sequences.
- There is limited discussion on the data limitations in the current field. With only 4000 sequences, training points are very few, presenting a major challenge.
- The authors could include a brief statement on the broader impacts, such as the potential design of harmful molecules. As these models improve, the dual-use concern becomes legitimate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging and actionable review! We believe our revised paper will be strengthened by incorporating your suggestions. We hope our rebuttal further addresses your questions and concerns.
> Question 1
- We have ablated the inclusion of long (primarily ribosomal) RNAs in gRNAde’s training data, which also serves to tell us how # training samples and maximum length impact performance. See Appendix D, Ablation Study and in particular the results shaded in yellow under ‘Max. train RNA length’. Appendix Figure 15(a) is also relevant to show the length distribution.
- Number of training samples corresponding to maximum length cutoffs:
- cutoff @ 500 → 2607 samples
- cutoff @ 1000 → 2876 samples
- cutoff @ 2500 → 3467 samples
- cutoff @ 5000 → 4022 samples
- Overall, we found that (somewhat unsurprisingly) using more data and learning from ribosomes generally improves performance. At cutoff lengths 500/1000, there is a noticeable drop in performance. We eventually chose to report our main results for models trained on all data (cutoff at 5000) as it led to models with the lowest perplexity, which is akin to the ability to 'fit' the data distribution.
- We will include these additional details from the rebuttal in the ablation study in our revised manuscript.
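For readers unfamiliar with the metric: perplexity here is the exponential of the mean per-nucleotide negative log-likelihood, so a perfectly confident, correct model scores 1 and a uniform guess over A/C/G/U scores 4. A minimal sketch (the distributions below are illustrative, not actual model output):

```python
import math

def perplexity(true_seq, probs):
    """Perplexity = exp(mean negative log-likelihood of the true
    nucleotide under the model's per-position distribution)."""
    nll = [-math.log(p[nt]) for nt, p in zip(true_seq, probs)]
    return math.exp(sum(nll) / len(nll))

# Illustrative per-position distributions over A/C/G/U (made-up values)
probs = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "U": 0.1},
    {"A": 0.1, "C": 0.8, "G": 0.05, "U": 0.05},
]
print(perplexity("AC", probs))  # ≈ 1.34: low perplexity = good fit
```

Lower perplexity thus directly reflects how well the model 'fits' the sequence distribution, which is why it was used as the model selection criterion above.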
> Question 2
- See global response on ‘Why are performance improvements for multi-state gRNAde marginal’. In summary, the performance gains are marginal because of our challenging split, the lack of very dynamic RNA in the training set, and the difficulty of the task itself. Despite this, we believe there is some signal that multi-state architectures improve performance in both single-state and multi-state settings by explicitly extracting information about RNA dynamics.
- We have also included results for more expressive multi-state pooling methods (Deep Symmetric Sets) in the new PDF attached – unfortunately, it did not improve performance but we believe this will be interesting to readers.
> Weakness 1
- In the new PDF attached with the rebuttal, we have added **regression plots** measuring gRNAde perplexity vs. sequence recovery as well as 3D self-consistency metrics. We made plots for both the 14 RNAs from the Rosetta benchmark in Figure 2, as well as across all 100 RNAs from the test set (16 designs each → 1600 points in each plot). We measured correlation coefficients and MAE/RMSE of regression in each plot.
- We found weak positive/negative correlations (depending on the metric), as measured by Pearson/Spearman correlation coefficients of ±0.4 to 0.5. Recovery is more correlated than structural self-consistency (which is also harder to measure due to limitations of the structure predictor itself).
- Besides correlation, the visualizations also suggest that perplexity < 1.2 is indicative of good designs in terms of both recovery and structural self-consistency. There are clearly far more good designs at low perplexities below 1.2 compared to higher values.
- Thanks for highlighting this – **we think it will make for an important addition to the revised manuscript**.
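As a sketch of how such correlations can be computed, here is a self-contained version using rank-based Spearman alongside Pearson (the perplexity/recovery pairs below are made up for illustration; the actual values are in the regression plots of the attached PDF):

```python
import math

def pearson(xs, ys):
    """Pearson correlation: covariance normalized by both standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    # Spearman = Pearson correlation of the rank-transformed values
    # (no tie handling -- fine for a sketch with distinct values).
    rank = lambda v: [sorted(v).index(x) for x in v]
    return pearson(rank(xs), rank(ys))

# Hypothetical (perplexity, recovery) pairs: lower perplexity, higher recovery
ppl = [1.05, 1.15, 1.30, 1.60, 2.00]
rec = [0.58, 0.52, 0.45, 0.38, 0.30]
print(pearson(ppl, rec), spearman(ppl, rec))  # both strongly negative here
```

In practice `scipy.stats.pearsonr` / `spearmanr` would be used, which also report significance.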
> Weakness 2
- We acknowledge your point that we could have used stronger baselines than random, but our goal was to show a new capability of gRNAde that (to our knowledge) **has not been explored at all for RNA previously** in the literature.
> On limitation point 2
- We have noted that paucity of RNA structural data is a major limitation and challenge for modellers in our Introduction.
- Obviously, more data is better, but the inverse folding task is inherently local, so we should be thinking about quantity of data in terms of # of unique tokens/nucleotides (different from structure prediction, which is a more global task). In that regard, we feel 4K RNAs with an average length of 100-150 nucleotides is a sufficiently large dataset to develop at least a **useful** inverse folding tool for the community (e.g., it outperforms Rosetta and can be useful for fitness ranking, too).
> On limitation point 3
- That’s a fair point – we will append the societal impact section of the NeurIPS paper checklist (point #10) to make a statement on potential dual-use: “We hope that our tools contribute to the development of RNA-based therapeutics towards improving health outcomes and biotechnology applications. However, it is worth noting that generative models for biomolecule design can be misused for the development of harmful molecules for negative use cases.”
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for additional experiments and explanations. I think the paper is very well written, and expect that the additional resources in the form of notebooks and other code provided by the authors, can spur further innovation on this important problem.
It's interesting that across all the author evaluations, there is a sharp performance increase at perplexity < 1.2. This is a great work and I recommend acceptance. I have increased the overall score.
---
Reply to Comment 1.1.1:
Title: Thank you for responding
Comment: Thank you for responding to the rebuttal. We're happy to hear that it further addressed your concerns. | Summary: This paper proposes a geometric RNA design model. Specifically, it introduces a multi-state GNN to encode multiple conformations, aggregates these candidate representations, and feeds them to a decoder to predict probabilities of a set of candidate sequences.
Strengths: 1. This work creates a new dataset for RNA inverse design, with diverse properties such as sequence length, number of structures, and structural variations.
2. This work designs a multi-state RNA inverse design model, distinct from existing methods.
Weaknesses: 1. The technical contribution appears to be somewhat weak. The backbone used is from existing GNN models for equivariant design.
2. Some technical details are not stated. For example, are the comparison baselines retrained on the new dataset or simply tested using their released models?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It is nice that they compared several SOTA baselines. However, I'm not sure if they're retraining these baselines on the new dataset or just testing their released models.
2. If I understand the claimed multi-state concept correctly, they are multiple conformations of a single RNA as input. It seems that multi-state design is an ensemble fusion of candidate representation. So, for a specific RNA with multiple conformations, does multi-state design always achieve better metrics than single-state design?
3. I am still concerned about the technical insights of this paper, as NeurIPS places a strong emphasis on technical contributions. I also note that this paper has been accepted at an ICML 2023 workshop.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: This paper has no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your actionable review. We think details in our appendix and rebuttal responses address several of your questions and concerns – please do consider revising your score if you find the responses satisfactory and let us know if there is something we can further clarify.
> Weakness 1 and Question 3
- See global response on ‘Lack of architectural novelty’.
- The other reviewers have positively noted the following **new technical contributions**:
- Careful data preparation and experimental setup as well as evaluation (vioU, h3v6)
- New design capabilities: Improved inference speed and accessibility over physics-based tools for RNA design (h3v6); Zero-shot mutant fitness ranking (E3ka, QQ46)
- Please reconsider the suitability of our work for NeurIPS. The official Call for Papers clearly states: “We invite submissions presenting new and original research on topics including but not limited to the following: … **Applications**, **Machine learning for sciences**.” We believe our work is well within the scope of NeurIPS.
> Weakness 2 and Question 1
- For the experiments in Figure 2, Rosetta and the other methods are not ML models but rather physics-based software which **does not require training**. We reported the numbers stated in [Das et al. 2010](https://www.nature.com/articles/nmeth.1433), Supplementary Table 1. In more detail:
- As noted in line 236-237, we have followed the evaluation protocol established by Rhiju Das et al. in their Nature Methods paper which first pioneered 3D RNA design: (1) we have evaluated our model on the same set of 14 high quality RNA 3D structures of interest from the PDB, and (2) for the physics-based baselines, we have reported the numbers from Das et al. 2010. See Appendix E, Table 2 for full results per RNA.
- The ViennaRNA 2D-only baseline has been evaluated by us using the ViennaRNA python package. This is a thermodynamics-based method which does not involve any training, either. We will make a note of this in our revision – thank you!
- In summary, the only method requiring training is gRNAde (our work), and we have carefully prepared data splits to evaluate for generalization and ensure fair comparison to these classical baselines (detailed in line 201 onwards).
- All the other experiments involve baselines and models that have been trained from scratch by us using the same experimental settings and evaluation protocols as our final model.
- **Please let us know any other minor/major experimental details that you felt were missing**. We will promptly clarify – we want to make the experimental protocol and setup completely transparent and reproducible (along with a detailed and documented codebase).
> Question 2
- Your understanding of the multi-state setting is exactly correct!
- Yes, we show that multi-state gRNAde architectures marginally improve performance for multi-state RNAs in Figure 4:
- Figure 4(a) shows improvements in aggregated performance across 100 test set samples (all of which have multiple conformational states).
- Figure 4(b) shows **why** there is improvement: at a per-nucleotide level, multi-state gRNAde has better performance for nucleotides that are locally more flexible, undergo changes in base pairing within multiple states, and are on the surface.
- We would also like to share another interesting finding: multi-state gRNAde also improves performance on the single-state test set. See Appendix D, ‘Ablation Study’, line 600 onwards (in the table, see the columns for sequence recovery as well as 2D/3D self-consistency for the ablated variants highlighted in red). Thus, even for test set RNAs that have only one state available, training gRNAde in a multi-state manner can lead to better inverse folding performance.
- What do we understand/take away from these results? Multi-state training seems to allow gRNAde to better understand RNA dynamics and conformational changes.
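For concreteness, the multi-state fusion described above boils down to permutation-invariant (Deep Set-style) pooling of per-conformer, per-nucleotide features. A toy sketch (feature values and dimensions are illustrative; the real model pools learned GNN embeddings):

```python
def pool_conformers(states, mode="mean"):
    """Permutation-invariant pooling of per-conformer node features.

    states: list of conformers, each a list of per-nucleotide feature vectors.
    Returns one pooled feature vector per nucleotide; the result is
    independent of the order in which conformers are given.
    """
    n_states = len(states)
    pooled = []
    for per_node in zip(*states):  # gather the same nucleotide across conformers
        summed = [sum(feats) for feats in zip(*per_node)]
        pooled.append([s / n_states for s in summed] if mode == "mean" else summed)
    return pooled

# Two conformers of a 2-nucleotide RNA, 3 features per nucleotide (toy values)
s1 = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]
s2 = [[3.0, 0.0, 1.0], [2.0, 1.0, 4.0]]
print(pool_conformers([s1, s2]))  # [[2.0, 1.0, 2.0], [1.0, 1.0, 2.0]]
```

Because the pooling is symmetric in the conformers, the same decoder can consume one state or many without architectural changes, which is what enables the single-state and multi-state settings to share weights.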
---
Rebuttal Comment 1.1:
Comment: My main concerns are addressed. Thanks for your responses.
---
Reply to Comment 1.1.1:
Title: Thank you for responding
Comment: Thank you for acknowledging the rebuttal -- we are happy to hear your main concerns are now addressed.
Would you consider increasing your score to reflect that? | Summary: This paper introduces gRNAde, a geometric deep learning pipeline for RNA sequence design conditioned on one or more 3D backbone structures. gRNAde is superior to the physics-based Rosetta for 3D RNA inverse folding in terms of performance, inference speed, and ease of use. The method demonstrates significant superiority across various experiments.
Strengths: 1. The authors introduce gRNAde, the first work to consider multi-state biomolecule representation. This study explores the feasibility and specific experimental results of using multi-state biomolecule representation, providing new ideas for researchers in the field.
2. The authors conduct extensive experiments and analyses on multiple datasets and experimental settings, demonstrating the model's effectiveness from various perspectives, especially regarding the "Zero-shot ranking of RNA fitness landscape" experiment, which is currently lacking in this field.
3. The authors present various experimental details using numerous visualizations and data tables, making the paper easier for readers to understand.
Weaknesses: 1. The model architecture proposed by the author lacks innovation. The core structure of gRNAde is directly stacked using GVP-GNN, and the handling of multi-state conformations is merely simple stacking. Additionally, the 3-bead representation is very common in traditional RNA 3D structure modeling, which is also not an innovation by the author. Therefore, I believe the model design is lacking.
2. The baselines compared by the author in various experiments are either outdated or too simple, such as "Rosetta(2020)" and the "random baseline" in the Zero-shot experiment. This makes it difficult to demonstrate the actual performance of gRNAde. Some recent works using deep learning to model RNA 3D structures can serve as baselines, such as [1-3].
3. The author mentions that gRNAde has a significant speed improvement over Rosetta, but the author did not run the Rosetta code themselves and instead directly cited the original Rosetta paper. I believe this point is debatable because the model's running speed is also limited by GPU computational performance. The author uses an A100, whereas the GPU used by Rosetta four years ago is obviously inferior to the A100. Therefore, the author needs to rerun the Rosetta program on the A100 to provide accurate model inference times.
[1] Geometric deep learning of RNA structure, Science 2021
[2] Physics-aware Graph Neural Network for Accurate RNA 3D Structure Prediction, NIPS workshop 2022
[3] RDesign: Hierarchical Data-efficient Representation Learning for Tertiary Structure-based RNA Design, ICLR 2024
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The second experiment results indicate that multi-state biomolecule representation did not significantly improve performance. What does the author believe is the reason for this?
2. The RNAs used in the first experiment are older and limited in number. Has the author considered testing with more recently published and more numerous RNAs?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors discuss practical tradeoffs to using gRNAde in real-world RNA design scenarios in Appendix B, including limitations due to the current state of 3D RNA structure prediction tools.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review – please see our detailed responses below – we believe we have addressed several of your concerns and questions. Please let us know what further information we can provide to make you reconsider your vote to reject the paper.
> Question 1
- See global response ‘On marginal improvements of multi-state gRNAde’. In summary, the performance gains are marginal because of our challenging split, the lack of very dynamic RNA in the training set, and the difficulty of the task itself. Despite this, we believe there is some signal that multi-state architectures improve performance in both single-state and multi-state settings by explicitly extracting information about RNA dynamics.
> Question 2
- The 14 RNAs that are used to compare to Rosetta in Figure 2 are indeed a decade old, but we believe they are still a great benchmark because:
- They are extremely high resolution crystal structures (even by today’s standards for RNA).
- They were handpicked by Rhiju Das in their [Nature Methods paper](https://www.nature.com/articles/nmeth.1433) as being RNAs which are interesting for biologists to design and develop new versions of.
- In addition to the above, in Appendix D, we report a more comprehensive set of results on our full test set of 100 RNAs in total (Single-state split). This includes the 14 from Figure 2 as well as all recently released versions of the same RNAs and their structural homologues (see line 201 onwards for precise description of the splits).
- In summary, the model’s performance does not degrade when tested on more recent and greater quantities of RNA.
> Weakness 1
- See global response on ‘Lack of architectural novelty’. We have also included results for more expressive multi-state pooling methods (Deep Symmetric Sets) in the new PDF attached – unfortunately, it did not improve performance but we believe this will be interesting to readers.
> Weakness 2
- Some comments on your suggestions:
- *Geometric deep learning of RNA structure* – this is a structure ranking model that takes as input an RNA structure + RNA sequence, and outputs a score for how closely that structure resembles the true structure (it actually predicts RMSD) – such structure prediction and ranking models cannot be used for inverse design.
- *Physics-aware Graph Neural Network for Accurate RNA 3D Structure Prediction* – similarly, this is also a model for structure prediction/ranking and it is inapplicable to the inverse folding problem. We will cite it in our revision.
- *RDesign: Hierarchical Data-efficient Representation Learning for Tertiary Structure-based RNA Design* – please see global response on ‘Comparison to RDesign’ for why apples-to-apples comparison to their work is not possible.
- We have done a literature review in Appendix A where we have discussed deep learning for RNA structure modeling, why current tools cannot be used for inverse design, and how gRNAde is contextualized within the broader literature.
- We believe Rosetta is a highly relevant baseline as it is the state-of-the-art in physics-based modeling of RNA. Yes, we realize that the Rosetta results are a decade old, but the reason for this is that 3D RNA design has not received as much attention from the Rosetta community as 3D protein design.
> Weakness 3
- Unfortunately, it is not possible to use RNA design recipes in the latest Rosetta builds (we did try).
- A major limitation of Rosetta recipes is that many of them, including the RNA design recipes, do not use GPUs (and this is a major advantage of new deep-learning-based alternatives), which is why they are so slow. Most of the Rosetta recipes are also inherently slow because they use MCMC sampling and need to iterate until convergence, whereas deep learning models are one-shot predictors/generators.
- [Tmol](https://github.com/uw-ipd/tmol) is an ongoing effort by Institute of Protein Design to port Rosetta functionality to GPUs in a differentiable manner. However, as you can see from the github this is a very early effort which is still in development without any documentation.
- **This is all the more reason for releasing gRNAde in an open source manner and easy to access via notebooks and tutorials.** We hope that making these datasets and tools more broadly accessible will invite renewed attention to 3D RNA design.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. However, I still have the following concerns:
1. In fact, a comparison with the three papers I mentioned is feasible, even though the tasks they address differ from yours. You use a stacked GVP architecture, but theoretically, the GVP could be replaced with the architectures from those three papers. The fundamental difference lies in the models used to represent the RNA 3D structure. As for RDesign, the lack of training code and the specifics of the data split do not prevent the transfer of the model architecture. Therefore, I believe that simply comparing with a random baseline is not persuasive.
2. Regarding the use of DSS, it essentially applies an existing method (DSS) to your dataset, but this does not constitute an innovation in your model architecture. In my view, this paper still lacks sufficient innovation in terms of the modeling approach.
3. Concerning the use of Rosetta, while it is true that most methods within Rosetta rely on MCMC sampling and run on CPUs, this means that you cannot claim faster performance in comparison, as that would be unfair. You should compare your methods with other deep learning methods which run on GPU.
4. For the 3D RNA inverse design task, you are not the first to define or provide a dataset for this problem. RDesign also addresses this task and presents a more novel modeling approach. The main difference lies in the use of the multi-state (please correct me if I misunderstood).
If I have misunderstood any of the above points, I would greatly appreciate it if you could clarify. If you can provide more detailed answers to these questions, I would be happy to consider raising my score.
---
Reply to Comment 1.1.1:
Title: Thank you for responding
Comment: Thank you for acknowledging the rebuttal and wanting to engage in discussion.
> On point 1
- We have in fact ablated the architecture of the GVP GNN used within our model. Please see Appendix D, line 587 onwards and the results in green. We compared using a rotation invariant GNN vs. rotation equivariant GNN, which we believe is the **fundamental concept when building geometric 3D GNNs** that should bring insight to readers (as opposed to what specific choice of layers one makes which differs a lot from paper to paper).
- On RDesign, we disagree -- we have seen repeatedly in structural biology that **using random data splits will give the impression that models are working well, but their o.o.d. performance will be greatly exaggerated** (examples: [1](https://arxiv.org/abs/2308.05777) [2](https://openreview.net/forum?id=A8pqQipwkt) [3](https://arxiv.org/abs/2206.12411) -- it is very simple to delude ourselves with random splits).
- RDesign seems overfit on its training data based on [the logs](https://github.com/A4Bio/RDesign/blob/master/checkpoints/log.log): their training perplexity is an order of magnitude lower than their validation perplexity even on random splits...
- Running RDesign on our splits is guaranteed to have data leakage.
- Also, **we did ablate the fundamental conceptual difference between RDesign and gRNAde** -- autoregressive vs. non-autoregressive decoding -- in Appendix D, line 591 onwards. We find that autoregressive decoding is better for structural self-consistency, which we care more about in real-world design scenarios. We think this finding will be super interesting to the community working on other types of biomolecules, too, as there is debate on the two approaches to decoding.
- Finally, if you really want to compare numbers, our model's recovery rates are in the 50s for the single-state split. RDesign's recovery rates are in the 40s in their Table 2.
> On point 2
- Adapting existing architectures to new problems, doing the experiments rigorously, setting the first benchmarks for a field, and releasing open-source resources so others can build upon them is a valuable contribution. This is stated in the NeurIPS Call for Papers, too.
- It is valuable for the community to know how to apply our best tools to new problems. Doing applied work well requires strong understanding of the application domain as well as the deep learning architectures and evaluation, but may not always involve new architecture ideas.
- It would obviously have been nice for the novelty of our paper if we could have proposed a fancy multi-state fusion method. However, we benchmarked many ideas rigorously and did not find them to bring improvements over Deep Set. We should not be penalized for this.
> On point 3
- The point of those experiments is to demonstrate that deep learning tools perform better and are **more broadly accessible** than Rosetta RNA (its latest build cannot even run RNA design...). We tried to compare as fairly as possible to Rosetta by being careful about the splits.
- There is a lot of precedent for deep learning papers claiming superior speed and performance to Rosetta in the same manner as us for other tasks, such as inverse folding, structure generation, rotamer optimization, etc. (Examples [1](https://www.science.org/doi/10.1126/science.add2187), [2](https://www.mit.edu/~vgarg/GenerativeModelsForProteinDesign.pdf), [3](https://www.pnas.org/doi/full/10.1073/pnas.2216438120) and many structure prediction papers rely on this claim)
- We do take your point though and we will be more careful about our Rosetta speed claim. We propose to revise it to add caveats everywhere in our paper, for instance:
- Line 60: "gRNAde is significantly faster than Rosetta for inference; e.g. sampling 100+ designs in 1 second for an RNA of 60 nucleotides on an A100 GPU, compared to the reported hours for Rosetta **on CPUs, making gRNAde more broadly accessible**."
- Line 246: "Rosetta takes order of hours to produce a single design due to performing expensive Monte Carlo optimisations **on a CPU**."
> On point 4
- RDesign and our work were developed concurrently but **they beat us to publication**. These kinds of situations are very difficult for authors and we have tried to provide an honest and detailed comparison between the two works in line 507 onwards as well as the ablation study experiments we already pointed you to.
- RDesign did not release training code -- this makes it impossible to reproduce their study.
- RDesign's evaluation makes use of random splits (we already highlighted why this has issues), does not compute any metrics beyond recovery, and does not compare to Rosetta or physics-based tools.
- We think our study brings significant new insights to the community and can hopefully be complementary to the RDesign paper. We believe these are concurrent works and one being published first should not be the reason to stop the publication of the other.
---
Rebuttal 2:
Title: Now trying to run RDesign
Comment: We also wanted to add an additional note: After your comment, we are now trying to run RDesign to see if we can load the datasets and checkpoint, as well as run inference with it.
1. There are **no installation instructions** available in the repository: https://github.com/A4Bio/RDesign/
2. The dataset files released by them are **corrupted** (we tried loading them on a Macbook, a linux server, and on Github Codespaces). You can even try it quickly on your end on your browser:
- Open a GitHub Codespaces on their repository.
- Download the dataset files they have released: https://github.com/A4Bio/RDesign/releases/tag/data
- Upload any of the files to the Codespaces.
- Open a terminal or create a new jupyter notebook in the Codespaces and type the following:
```
import torch
torch.load("<path-to-data>/<data_file>.pt") # or torch.load("<path-to-data>/<data_file>.pt", map_location='cpu')
```
...which will always lead to the following error:
```
RuntimeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 torch.load("val_data.pt", map_location='cpu')
File ~/.local/lib/python3.10/site-packages/torch/serialization.py:1040, in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
1038 except RuntimeError as e:
1039 raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
-> 1040 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File ~/.local/lib/python3.10/site-packages/torch/serialization.py:1264, in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
1262 magic_number = pickle_module.load(f, **pickle_load_args)
1263 if magic_number != MAGIC_NUMBER:
-> 1264 raise RuntimeError("Invalid magic number; corrupt file?")
1265 protocol_version = pickle_module.load(f, **pickle_load_args)
1266 if protocol_version != PROTOCOL_VERSION:
RuntimeError: Invalid magic number; corrupt file?
```
This happens for every data file they have provided.
We will keep you posted if we manage to run the code, but these two points should already tell you why direct apples-to-apples comparisons are so difficult in this situation. Without being able to load the processed dataset files, and without any documentation in the README or within the code as to what is the expected format of input data to their model, it is not possible to re-train or do inference with this model.
---
Rebuttal 3:
Title: Further issues with RDesign
Comment: Another issue that makes direct comparison with RDesign impossible is that **their model does not implement sampling** during inference.
- Their model class contains a method called `sample()`: https://github.com/A4Bio/RDesign/blob/master/model/rdesign_model.py#L76 -- however, this method does not actually implement any sampling from the probability distribution outputted by the model. It just directly outputs the probability distribution.
- This would explain why RDesign's perplexity numbers are nonsensical (Appendix E.3 of their paper, as well as the training logs for their model).
- Not being able to perform sampling means that the model will be useless in real design scenarios (especially as it is non-autoregressive/one-shot independent decoding per token), because it will keep outputting the same probability distribution and inherently cannot be used to generate diverse sequences as a result.
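For reference, here is a generic illustration of what a `sample()` method is expected to do in this setting -- draw diverse sequences from the per-position categorical distributions. This is a sketch, not RDesign's or gRNAde's actual code, and the temperature re-weighting is one common variant:

```python
import random

def sample_sequence(probs, temperature=1.0, rng=random):
    """Draw one sequence by sampling each position from its categorical
    distribution (non-autoregressive: positions are sampled independently).
    probs: list of dicts mapping nucleotide -> probability."""
    seq = []
    for dist in probs:
        letters = list(dist)
        # Temperature < 1 sharpens the distribution, > 1 flattens it
        weights = [dist[nt] ** (1.0 / temperature) for nt in letters]
        seq.append(rng.choices(letters, weights=weights)[0])
    return "".join(seq)

# Toy distributions: repeated calls yield *different* sequences, unlike a
# method that just returns the probability distribution (or its argmax).
probs = [{"A": 0.5, "C": 0.2, "G": 0.2, "U": 0.1}] * 10
designs = {sample_sequence(probs) for _ in range(50)}
print(len(designs))  # typically > 1: sampling produces diverse designs
```

Without a step like `rng.choices` above, every call returns the same output, which is why a model lacking it cannot generate the diverse candidate pools needed for real design campaigns.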
Is the reviewer somewhat convinced now that:
1. Our work brings something new to the community and is a valuable contribution?
2. Direct comparison to RDesign is not possible given the limitations of both their experimental methodology as well as their reproducibility?
---
Rebuttal Comment 3.1:
Title: Got RDesign to work; results are very poor
Comment: After a lot of effort, we figured out how to make the RDesign code perform inference.
We have made direct comparisons to RDesign in this comment: https://openreview.net/forum?id=Fm4FkfGTLu&noteId=RdkCCdrrqx - RDesign underperforms gRNAde and Rosetta. | Summary: This work designed gRNAde, a geometric deep learning pipeline for RNA sequence design conditioned on one or more 3D backbone structures. To achieve this, the authors created single-state and multi-state 3D RNA structure datasets, built a geometric graph representation, and proposed an architecture consisting of a multi-state GNN encoder, a pooling layer, and an autoregressive decoder. The single-state RNA design, multi-state RNA design, and zero-shot ranking experiments were conducted and results show that gRNAde outperformed all previous methods including Rosetta.
Strengths: 1. The datasets were carefully designed. Only high-resolution structures were retained. Two kinds of clustering were used to split the train, validation, and test sets, with the hard samples assigned to the test sets.
2. The model architecture makes use of information from multiple conformations. This is achieved by sum or average pooling.
3. The experiments were conducted fairly. Datasets were split carefully. The results were averaged on 16 sampled sequences across 3 random seeds.
4. The inference speed is much faster than traditional methods. This makes it possible to use the model in high-throughput screening. The zero-shot ranking ability is also an advantage.
Weaknesses: 1. The model architecture has no novelty. All components are taken from previous work and the overall structure is very similar to that of ProteinMPNN. The multiple conformations are processed independently and the representations are simply averaged or summed, which may not capture all the information.
2. The model is trained on only about 4 thousand RNA sequences. These sequences are too few to cover the entire space. RNAs with no 3D structures should be exploited, as done in AlphaFold3.
3. The results about single-state RNA design were reported on only 14 samples. More samples should be used to test the model. For example, the results on test sets with 100 samples should be reported.
4. The improvements on the multi-state RNA design task are limited. The native sequence recovery is obviously lower than the results of the single-state design task.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Lots of the RNA conformations come from protein-RNA complexes and DNA-RNA hybrids, which means these conformations do not appear alone. Since the model encoder cannot consider other molecules, the extracted representations are biased. How about adding a scalar feature to indicate if a nucleotide is on the interface with another molecule?
2. Why is the decoder only designed for autoregressive decoding? The arbitrary decoding in ProteinMPNN is useful when part of the sequence is already known.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The representation ability of gRNAde remains to be verified. Usually, the representation is extracted when all nucleotides are known, so the architecture for the inverse folding problem may not be suitable for representation learning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your actionable review. We hope that our rebuttal answers your questions sufficiently and makes you reconsider some points that you noted as weaknesses. Please do consider revising your score if you find the responses satisfactory and let us know if there is something we can further clarify.
We would also like to incorporate your suggestion on arbitrary decoding order into our revision -- thank you for it!
> Question 1
- We can certainly do that and we imagine this will improve models’ performance at designing interfaces. We are developing the next version of gRNAde which goes beyond just scalar features and instead considers nodes from the interaction partners together with the RNA's nodes. We hope to use this model for binding-related tasks such as aptamer design. However, this is still **work in progress**.
> Question 2
- It is very simple to do arbitrary decoding orders for autoregressive decoding b/c we can simply permute the inputs during training and inference. And we agree that arbitrary decoding order is extremely useful in real design scenarios where we are usually given partial sequences.
- In the new PDF attached with the rebuttal, we have added results for gRNAde trained with random decoding order (shaded in green) and found that this leads to only minor decreases in sequence recovery and 3D self-consistency. Perplexity gets significantly worse as it is more challenging for the model to fit the training data. The same observations were made in the ProteinMPNN paper.
- Thanks for the suggestion – **we will include these results in our updated manuscript.**
> Weakness 1
- See global response ‘On architectural novelty’. We have also included results for more expressive multi-state pooling methods (Deep Symmetric Sets) in the new PDF attached – unfortunately, it did not improve performance but we believe this will also be interesting to readers and want to include it in the ablation study.
> Weakness 2
- Obviously, more data is better, but the inverse folding task is inherently local so we should be thinking about quantity of data in terms of **# of unique tokens/nucleotides** (different from structure prediction, which is a more global task). In that regard, we feel 4K RNAs with an average length of 100-150 nucleotides is a sufficiently large dataset to develop at least a **useful inverse folding tool for the community**. As evidence of this:
- gRNAde performs better than Rosetta, the best physics based tool for 3D RNA design.
- gRNAde was found useful for ranking Ribozyme fitness in our retrospective study.
- Based on the Ribozyme retrospective study, we felt confident enough to send gRNAde’s designed sequences for the same Ribozyme for experimental validation to our collaborators.
- AlphaFold 3 only came out 1 week prior to the NeurIPS deadline – we will certainly consider using their ideas in our future work but please don’t hold it against us for not using RNAs without 3D structure!
> Weakness 3
- We have already done what you have asked. We elaborate below:
- Figure 2 does use only 14 RNAs to compare to Rosetta, but we believe they are still a great benchmark because:
- They are extremely high resolution crystal structures (even by today’s standards for RNA).
- They were handpicked by Rhiju Das in their [Nature Methods paper](https://www.nature.com/articles/nmeth.1433) as being RNAs which are interesting for biologists to design and develop new versions of.
- In addition to the above, in Appendix D, we report a more comprehensive set of results on our full test set of 100 RNAs in total (Single-state split). This includes the 14 from Figure 2 as well as all recently released versions of the same RNAs and their structural homologues (see line 201 onwards for precise description of the splits).
> Weakness 4
- See global response ‘On marginal improvements for multi-state gRNAde’. In summary, the performance gains are marginal b/c of our challenging split, lack of very dynamic RNA in the training set, and difficulty of the task itself. Despite this, we believe there is some signal that multi-state architectures improve performance in both single-state and multi-state settings by explicitly extracting information about RNA dynamics.
> On the stated limitation
- We agree that inverse folding generative models are not suitable for representation learning/predictive tasks. It has also been seen for proteins that models for inverse folding are not useful for representation learning (eg. [this paper](https://www.biorxiv.org/content/biorxiv/early/2022/11/21/2022.05.25.493516.full.pdf) by Kevin Yang) -- our goal is inverse design and not representation learning.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for taking the time and effort to answer my questions, but I want to keep my neutral score. Although the performance seems ok, I still think the contribution of this paper is more like an extension of existing architecture on RNA-related tasks. This neutral score means that if other reviewers and AC lean to accept this work, I would not oppose it.
---
Rebuttal 2:
Title: Thank you for responding
Comment: Thank you for acknowledging the rebuttal. We hope you will reconsider, given that we put significant effort to address your review and we think it did improve the paper overall.
Adapting existing architectures to new problems, doing the experiments rigorously, setting the first benchmarks for a field, and releasing open-source resources so others can build upon them **is a valuable contribution**. The NeurIPS Call for Papers states: “We invite submissions presenting new and original research on topics including but not limited to the following: … **Applications**, **Machine learning for sciences**.” Doing applied work well requires strong understanding of the application domain as well as the deep learning architectures and evaluation, but may not *always* involve new architecture ideas.
We addressed as many of your weaknesses and questions as we could via the rebuttal.
- Weakness 1: We tried more expressive set pooling methods, but it often happens in structural biology applications that complex architectural ideas **do not generalize** to o.o.d. test sets. It would obviously have been nice for the novelty of our paper if we could have proposed an exciting, new multi-state fusion method. However, we benchmarked many ideas rigorously and did not find them to bring real improvements.
- Weakness 2: We basically processed and prepared ML-ready datasets from **all the structured RNAs available publicly** on the internet; we couldn't use the AlphaFold3 self-distillation idea b/c it was not available when we did the work. We hope this is not counted as a weakness in your assessment.
- Weakness 3: We actually did use a lot more test samples and reported results with confidence intervals in the appendix. We hope this is also not counted as a weakness in your assessment.
- We can use our models with arbitrary decoding order, too. We did this experiment that you asked for, too. | Rebuttal 1:
Rebuttal: Thank you to the reviewers for their feedback and actionable suggestions! Everyone highlighted the following positives:
- Careful data preparation and experimental evaluation (vioU, h3v6, E2ka)
- Introduction of multi-state design and representation learning (E2ka, h3v6, Hi6M)
- New design capabilities: Improved inference speed over physics-based tools (h3v6); Zero-shot mutant fitness ranking (E2ka, QQ46)
- Clear presentation and writing (vioU, E2ka, QQ46)
We have individually responded to each reviewer’s questions and concerns. We have also addressed common questions below.
---
## Comparison to RDesign
- Please see Appendix A, ‘Comparison to contemporaneous work’ for details re. why apples-to-apples comparison to RDesign is not possible (**no training code** and **use of random splits**, primarily). We state in detail the differences to our work in terms of methods, evaluation, and open science.
- Additionally, in Appendix D, ‘Ablation Study’, we ablate gRNAde’s architecture and compare the performance of our autoregressive model in the main paper with a **non-autoregressive variant** -- this is what RDesign uses in their architecture.
- It is very interesting that non-autoregressive decoding improves sequence recovery but autoregressive decoding has significantly higher 2D and 3D self-consistency scores (which we care about more in real-world design scenarios). We have provided more details in the appendix.
- We think this finding (and others from the ablation study) will be interesting to others working on biomolecule inverse folding.
- We also note that these works were developed concurrently, but their paper got published at a conference first. We hope this will not be held against us.
---
## On architectural novelty
- Our contributions focus on a **new application for geometric deep learning** (3D RNA design), developing careful datasets and splits, as well as rigorous experimental protocols. This aligns with broader trends across deep learning, emphasizing thorough experimentation with existing models over architecture engineering.
- In Appendix D, we ablated our architecture design of the encoder-decoder GVP-GNN and found **new insights** about inverse folding models. These architectures have been extremely successful for protein design (eg. ProteinMPNN uses one and has led to a $1B startup), so we think our findings will be of broader interest.
- In the new PDF attached with the rebuttal, **we explored a new multi-state pooling function** based on [Deep Symmetric Sets](https://arxiv.org/abs/2002.08599), which is **provably more expressive than Deep Sets** when pooling over a set of features which themselves have symmetry constraints (in our case, we are pooling features from a set of RNA conformations, each of which is roto-translation and permutation equivariant). Results are shaded in blue, compared to red for Deep Sets. DSS **does not significantly improve performance** over DS on the test set, although it fits the training data better. Expressive architectures overfitting and **not generalizing to out-of-distribution data splits** has been a repeated trend across ML for structural biology, which we think further justifies our decision to keep the architecture simple.
- We think that establishing rigorous protocols and releasing **open source code** will lead to others developing better RNA inverse folding architectures in the future.
---
## On marginal improvements for multi-state gRNAde
- We agree that the results in Figure 4 show that multi-state gRNAde architectures marginally improve performance for multi-state RNAs. Just to reiterate:
- Figure 4(a) shows overall improvements in aggregated performance across 100 test set samples, eg. sequence recovery from 0.455 → 0.484.
- Figure 4(b) shows **why** there is improvement: at a per-nucleotide level, multi-state gRNAde has better performance for nucleotides that are locally more flexible, undergo changes in base pairing within multiple states, and are on the surface.
- Why is the performance gain only marginal? We believe there are several reasons:
1. **Relatively fewer multi-state training samples**: There are multiple states available for 1.5K sequences out of 4K+ (see Appendix Figure 15 for visualizations). Out of these, the most interesting and highly dynamic RNAs (with larger RMSDs between states) are actually assigned to our validation and test sets during data splitting, so models are trained on less dynamic RNAs and are being **evaluated for their generalization capability** to highly dynamic RNAs.
2. **Difficulty of the multi-state task itself**: In addition to point (1), we can further see why the multi-state split is such a difficult task by comparing the ‘Groundtruth sequence prediction baseline’ in Appendix D, Table 1 for the single-state and multi-state split.
3. **Challenge of evaluating multi-state design**: Structural self-consistency metrics are not ideal for evaluating RNAs which undergo changes to their structure; it would perhaps be more principled (but extremely slow, expensive and intensive) to run MD simulations to validate our multi-state design models.
We would also like to share another interesting result from the appendix: **multi-state gRNAde also improves performance on the single-state test set.** See Appendix D, ‘Ablation Study’, line 600 onwards (in the table, see the columns for sequence recovery as well as 2D/3D self-consistency for the ablated variants highlighted in red). Thus, even for test set RNAs that have only one state available, training gRNAde in a multi-state manner can lead to better inverse folding performance.
Overall take away: Multi-state training seems to allow gRNAde to better understand and extract information about RNA dynamics and conformational changes, as shown by:
- Improved performance on single-state and multi-state sets.
- Source of improvement comes from nucleotides that are more structurally flexible.
Pdf: /pdf/c6111d5379ec02e6a23f714498812a85ad6d82f1.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: This paper introduced a multi-state geometric graph neural network for the RNA inverse folding problem. Experiments are conducted on carefully split structural datasets that avoid data leakage. The results have shown convincing performance improvement over physics-based methods such as FARFAR and Rosetta, which are commonly used for RNAs.
Strengths: - This paper is well written and pleasing to read. Explanations on key biological concepts related to RNAs, and how they motivate the model design as well as experiment setup are adequate and above all, clear.
- Evaluation metrics are well considered. The self-consistent scores on the secondary and tertiary levels are meaningful.
- Limitations on self-consistency scores are acknowledged in the main text. For RNAs, many challenges are unique, especially when they are compared to proteins. Therefore, clarifications and precautions are particularly needed for RNA-related tasks. I appreciate the authors’ effort in stating these limitations clearly in the main text.
Weaknesses: - Comparison to contemporary deep learning models for RNA inverse folding is limited
- RDesign (https://openreview.net/forum?id=RemfXx7ebP) for example is a recent deep learning based method for 3D RNA inverse design
- For inverse design on the secondary structure level there are many more options — a lot of them are better than RNAinverse from ViennaRNA. I would suggest checking out this survey (Design of RNAs: comparing programs for inverse RNA folding) and include a few other more competitive baselines.
- For the self-consistency scores, I personally doubt if RhoFold (also called e2efold-3d) is reliable software for RNA tertiary structure prediction, since it is from the same group that published e2efold, which is a spectacularly awful RNA secondary structure predictor (I personally would avoid using any of their tools; check out its GitHub issues, and also follow-up works on RNA secondary structure prediction that have compared with e2efold). Have the authors used more recent folding software such as RosettaFoldNA and AlphaFold3?
- Would using different structure predictors significantly impact the results? This also includes EternaFold. How would gRNAde hold up against the baselines when RNAfold or LinearFold is used to compute the self-consistency scores on the secondary level?
- Data splits (train, validation and test) are carefully constructed so that the evaluation is not contaminated by data leak. But I wonder if the TM-score cut-off at 0.45 is too lenient? Is it still possible to have similar structures between training and test sets under this threshold?
Lines 258 to 259: The argument would be more compelling if the inverse design operated at the quaternary level, which would include information about ligand structures.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Why would a perfect model have a perplexity of 1 (i.e. mapping each structure to a single sequence)? The solution for RNA inverse folding should be a one-to-many mapping, because for a target backbone structure there are potentially many sequences that can fold into the same structure. Likewise, a perplexity of 4 doesn’t mean the predictions are nonsense. For unfolded RNAs (e.g. a linear chain), they very likely correspond to random RNA sequences.
- Conformers would indicate that the structures are the local energy minimizers on the potential energy surface. How would you know if these multi-state conformations are really conformers? Where are multi-state conformations from?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: - Comparison to contemporary models for RNA inverse folding is a bit hollow. It would be more meaningful if some deep learning based baselines can be included into the comparison.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging review! We think that incorporating your comments will strengthen our revised paper and we hope our responses address your questions in the best way possible.
> Question 1:
- To clarify, ‘perfect’ in this sentence is from a machine learning context, not from an applied context. So given an RNA backbone as input, a perfect ML model will output its groundtruth sequence from the PDB, which it would have memorized (leading to perplexity = 1).
- However, such a perfect ML model is completely useless from an applied perspective, because we never want to perfectly recover the groundtruth sequence from the PDB. We want to sample a diverse set of designs which are reasonably close to the PDB sequence while remaining interesting/non-trivial.
- Perplexity = 4 would mean that the model is randomly selecting from the 4 bases for each position in the sequence, so your understanding is correct.
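To make the two perplexity reference points in this exchange concrete, here is a minimal sketch (illustration only, not code from the paper) of sequence perplexity as the exponentiated mean negative log-likelihood of the true base at each position:

```python
import math

def perplexity(p_true):
    """Exponentiated average negative log-likelihood, where p_true[i] is the
    probability the model assigns to the groundtruth base at position i."""
    nll = [-math.log(p) for p in p_true]
    return math.exp(sum(nll) / len(nll))

# A model that has memorized the groundtruth assigns probability 1 everywhere:
print(round(perplexity([1.0] * 12), 3))   # → 1.0
# A model picking uniformly among the 4 bases (A, U, G, C) assigns 0.25 everywhere:
print(round(perplexity([0.25] * 12), 3))  # → 4.0
```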
> Question 2:
- We re-visited basic chemistry definitions and agree that ‘conformer’ is a term reserved for conformational states at which a molecule's energy is at a local minimum. **We will revise our paper to not use this term** – thank you!
- The multiple conformations that we use are from multiple deposited structures for a given PDB entry, or multiple PDB entries for the same RNA, or both. The answer to whether they are at local energy minima really depends on who one asks, in our opinion. Our current understanding is that it remains an open question whether crystal structures of biomolecules deposited in the PDB are truly energy minima or just frozen states/artifacts of crystallization.
- See Appendix Figure 15 for some statistics on # of multi-state RNAs, # of states per RNA, whether # of states is correlated with length (no), etc.
> Weakness 1 and Limitation 1:
- Please see the global response on ‘Comparison to RDesign’ for why apples-to-apples comparison is not possible. We have also done extensive ablation studies of our architecture as a way to understand the impact of different components on performance.
- 2D-only design techniques are meant for different types of RNA design problems than what Rosetta/gRNAde are intended for. They cannot incorporate 3D information about kinks and turns and motifs explicitly (b/c they optimize only for maintaining the same base pairing in designed sequences; they are also not evaluated for ‘recovery’ but rather for a ‘success rate’ of retaining the same exact base pairing).
- You can see this by comparing gRNAde variants to ViennaRNA in Appendix Table 1: ViennaRNA actually has **very good** 2D self-consistency score but poor 3D self-consistency as it is simply not designed to account for 3D structure. All other 2D inverse folding methods we are aware of also cannot account for 3D interactions, so we think it is reasonable to expect similar performance.
> Weakness 2:
- AlphaFold3’s paper was published one week before the NeurIPS deadline and its code or weights are not yet available. Our framework is flexible to swap out RhoFold for AF3 as soon as possible (we are also eager to move on as there are limitations to RhoFold's performance; see the 'Groundtruth sequence prediction baseline' in our ablation study for an upper limit on our test set).
- RF2NA is not a suitable choice as it is primarily for protein-NA complexes, not solo RNAs. We also think (intuitively) that RhoFold’s use of a language model makes it less reliant on MSAs than RF2NA (we almost never have MSAs for designed/synthetic RNAs).
- RhoFold was used in AIchemy_RNA which was used as a baseline in AlphaFold3 as ‘the top performing machine learning system’ on CASP15 targets. See Extended Data Fig. 5 in AF3 paper, where AF3 outperforms RhoFold but not outright (RhoFold is better on some targets). **So we think RhoFold is a reasonable choice at present.**
- We chose EternaFold b/c it has actually been evaluated on designed/synthetic RNAs in the wet lab before and is broadly accessible for benchmarking. We are aware that it is not the best 2D structure predictor for naturally occurring RNAs, but believe that it does a good job at recovering at least the correct base pairings (which is the goal of 2D self-consistency evaluation).
- Nobody has conducted a self-consistency score based benchmark for 3D RNA design before us, so we are unable to report self-consistency numbers for Rosetta/other baselines (it is not possible to use the RNA backbone design recipe in the latest Rosetta builds).
> Weakness 3:
- Our choice of a TM-score cut-off of 0.45 to determine whether RNAs are in the same global fold follows [US-align](https://zhanggroup.org/US-align/) and other similar works on RNA structure modeling (eg. see [recent CASP evaluation for RNA structure prediction](https://onlinelibrary.wiley.com/doi/full/10.1002/prot.26602) where the 0.45 cutoff was used). Whereas it is conventional in the proteins literature to cluster protein structures according to a 0.5 TM-score cut-off, given the wider and more flexible nature of RNA compared to protein structures, a slightly lower structural similarity cut-off for RNA is warranted.
> Weakness 4 / lines 258 and 259:
- For riboswitches, the aptamer domain that is interacting with the ligand partner is usually conserved, and we currently want to use gRNAde to re-design scaffolds around this binding domain that may be more thermally stable while retaining the switching mechanism. We think gRNAde can be somewhat useful for this already.
- But we agree that ligand conditioning during sequence design will provide greater context for multi-state design, and **we are actively developing a version of gRNAde which incorporates ligand partners**.
---
Rebuttal Comment 1.1:
Comment: I have read and appreciate the author's response.
While I appreciate the paper's writing and its clear biological motivation and analysis, I had hoped to see a more comprehensive comparison with competitive baselines. This seems to be a shared concern among other reviewers as well.
- According to [the conference policy](https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ), "Papers appearing less than two months before the submission deadline are generally considered concurrent to NeurIPS submissions.", so I am not sure if [RDesign](https://openreview.net/forum?id=RemfXx7ebP) satisfies that criterion, since it was published in Jan 16 2024 (although it was modified later in Apr).
- But anyways, I understand that RDesign lacks sufficient utility or means of reproduction. I appreciate any efforts the authors have put into running RDesign.
I was also hoping to see some sanity checks, in particular regarding the "consistency" of the 2D/3D self-consistency tests. Although these provide important insights into the model's performance, they are essentially based on imperfect computational models, so it is expected there will be variance in these self-consistency scores.
- This is the reason why, in my opinion, using a set of relatively good and reliable computational predictors for these self-consistency scores would provide more reliable performance evaluation.
- It is true that AlphaFold 3 is not publicly released but they have an online server where you can submit jobs. If they accept batch jobs, and if time permits, then I would suggest giving it a try...
- Now for the 2D self-consistency scores, I think it really doesn't hurt for you to give RNAfold or LinearFold a try. I doubt it would generate any significant impact on the paper's results, but having these additional checks would greatly strengthen this paper.
---
Rebuttal 2:
Title: Thank you for responding
Comment: Thank you for responding.
Re. RDesign
- As you noted, 2 other reviewers were concerned with the comparison to RDesign. Despite trying all options, we simply cannot reproduce its data format or run inference with their model at present (eg. no installation instructions, dataset files seem corrupted, model does not have an actual sampling mode). **We hope you would agree that the community needs good open source training code, datasets, and reproducibility in order to further develop this new research direction (RNA design with deep learning).**
- Also, please see our ablation study for apples-to-apples comparisons to RDesign's architectural ideas vs. gRNAde's (majorly, non-autoregressive/one-shot vs autoregressive), benchmarked fairly on our split and experimental settings. We think this brings insights to readers, b/c the question of autoregressive vs. one-shot models is also relevant for other biomolecule design tasks.
Re. publication date:
- The decision notifications for ICLR were released in January privately to authors. We would imagine that the actual publication date is when a camera-ready version of a paper is **released to the public audience** beyond Authors, Reviewers, ACs, PCs (which was April).
Re. consistency of self-consistency metrics: That's a great point and we do agree that these metrics are not perfect. These metrics (as with most metrics for generative models) are supposed to be proxies for actually evaluating these designs in real world experiments. So maybe they are not perfect, but the way we actually use them for RNA design is as filters, where really poor self-consistency means that the design is very likely a poor one (but good self-consistency does not automatically mean it's a good design). We have actually discussed this and **caveated our results** at several instances in the paper.
Re. AlphaFold3's server:
- We are restricted to **20 jobs per day**, and it is **not possible to submit batched jobs**.
- Given these restrictions, and the fact that AF3 server came out after the NeurIPS submission deadline, would you agree that actually using AF3 for our evaluations is impossible?
- Here is a rough scale of how many times a structure predictor is called during our evaluation protocol:
- For example, take 100 test RNA backbones.
- Design 16 sequences for each backbone.
- Fold the 16 × 100 designed sequences = 1600 calls per model evaluation (per one entry in the table in Appendix D)
- Assuming there are on average 5 authors in our team, evaluating one variant of our model would take 16 days via AF3 server. We have run experiments on ~25 models (not including further results we will be adding after rebuttals to other reviewers).
- Also note that there is no programmatic way to run AF3 server -- we would have been manually doing data entry.
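The scale estimate above can be checked with quick back-of-envelope arithmetic; all quantities below are the assumed figures quoted in this thread, sketched only for illustration:

```python
# Sanity check of the AF3-server cost estimate quoted above
# (all figures are the assumed numbers from this thread).
test_backbones = 100
designs_per_backbone = 16
folding_calls = test_backbones * designs_per_backbone  # structure-prediction calls per model variant

jobs_per_day_per_account = 20
accounts = 5  # ~5 authors, one AF3 server account each (assumption)
days_per_variant = folding_calls / (jobs_per_day_per_account * accounts)

print(folding_calls, days_per_variant)  # → 1600 16.0
```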
Re. 2D structure predictors: Okay, we will now be rushing to try one of these out and will report back the results honestly. **Please stay tuned.**
---
Rebuttal Comment 2.1:
Title: Using RNAFold instead of EternaFold for 2D self-consistency
Comment: > for the 2D self-consistency scores, I think it really doesn't hurt for you to give RNAfold or LinearFold a try. I doubt it would generate any significant impact on the results paper, but having these additional checks would greatly strength this paper.
We just got results for this experiment: Replacing EternaFold with RNAFold as the 2D structure predictor led to **unchanged results** and **did not modify the relative rankings of the models.**
Model parameters | EternaFold scMCC | RNAFold scMCC
---|---|---
AR, 1 state, Equivariant GNN | 0.5903 +- 0.0147 | 0.6051 +- 0.0185
NAR, 1 state, Equivariant GNN | 0.4337 +- 0.0324 | 0.4352 +- 0.0361
The results are for the single-state test split, and we reported results for 3 different models trained with identical random seeds (following the same protocols as the main paper).
We would like to add these results in the revised version of our paper, too.
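For readers unfamiliar with the scMCC metric in the table above: it is a Matthews correlation coefficient computed between base-pair predictions, and the MCC itself can be sketched from confusion counts as below (the framing of the counts as predicted vs. groundtruth base pairs is our assumption, not a statement of the exact protocol in the paper):

```python
import math

def mcc(tp, fp, fn, tn):
    # Matthews correlation coefficient from confusion counts; for scMCC the
    # counts would come from comparing predicted vs. groundtruth base pairs
    # (this framing is an illustrative assumption).
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts for a ~1000-pair comparison:
print(round(mcc(tp=40, fp=10, fn=10, tn=940), 3))  # → 0.789
```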
We have worked hard to alleviate your concerns as much as possible. Please let us know how to proceed. | null | null | null | null | null | null |
Approaching Human-Level Forecasting with Language Models | Accept (poster) | Summary: This paper introduces a forecasting system based on Language Models that aims to achieve human-level forecasting capabilities. It presents a system that autonomously searches for relevant information, generates forecasts, and aggregates predictions. Through collecting a large dataset of questions from competitive forecasting platforms, the authors test the system's end-to-end performance. The results indicate that the system performs nearly on par with the crowd aggregate of competitive forecasters and surpasses it in certain scenarios.
Strengths:
1. The paper introduces a novel and well-conceived retrieval-augmented LM system, effectively combining information retrieval, reasoning, and prediction aggregation to enhance forecasting accuracy.
2. The paper proposes a self-supervised fine-tuning method that leverages the model's own forecasts to generate training data, thereby improving the accuracy of predictions.
3. The result is based on a comprehensive dataset of questions from multiple forecasting platforms, enhancing its breadth and reliability.
Weaknesses: Weaknesses:
1. Compared to baselines, the system requires significant computational resources due to its summarization and multi-sampling operations. Although the authors use some methods to reduce cost, reporting token statistics and the costs incurred by the system and baselines may be necessary.
2. The system prompts the base model 3 times and the fine-tuned models 3 times, whereas the baseline is prompted only once. This may create an unfair comparison; an obvious baseline would be to sample the baseline 6 times and then vote or average.
3. Some descriptions are confusing. For example, Section 6.1 states “our averaged Brier score is .179, while the crowd achieves .149”, but the paper does not say which table these results come from.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for evaluating our work.
> Compare to baselines, the system requires significant computational resources due to its summary and multi-sampling operations. Although the authors use some methods to save the cost, report token statistics and cost used by the system and baseline may be necessary.
We agree that our system is more costly than the baseline method of naively prompting the base model once. However, we find that answering a single question with our system, even via the most expensive model like GPT-4, typically costs no more than 0.3 US dollars. This is a rather cheap method overall, especially in comparison with the alternative of hiring human expert forecasters, which usually requires significantly more resources.
In addition, since the first submission of our paper, AI labs have continued to make progress in inference optimization. The GPT-4o-mini model, for example, costs over 10x less than the particular versions of GPT-4 that we used in the paper, while performing even better on benchmarks. We expect that future developments will drive costs down even further, making our system extremely cheap to run.
> The system prompt base model 3 times and fine-tuned models 3 times, however the baseline is 1 time. Whether this creates an unfair comparison, an obvious baseline might be to sample baseline 6 times and then vote or average.
We implemented the baseline you recommended, utilizing 6 "gpt-4-turbo-2024-04-09" model calls, each with a temperature setting of 0.5. The results were as follows:
- Using the average for final votes: Score: .205 Standard error: .0104
- Using the trimmed mean for final votes: Score: .205 Standard error: .0108
(Our system’s score: .179, standard error: .003)
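For clarity, the aggregation schemes above (mean and trimmed mean over sampled probabilities, scored by the Brier score) can be sketched as follows; the sample probabilities below are hypothetical placeholders, not actual model outputs:

```python
# Sketch of the 6-sample aggregation baseline: combine sampled
# probabilities by mean or trimmed mean, then score with the Brier score.
# The sample values here are hypothetical, for illustration only.

def brier(prob, outcome):
    """Brier score for a binary question: squared error of the forecast."""
    return (prob - outcome) ** 2

def mean_vote(samples):
    return sum(samples) / len(samples)

def trimmed_mean_vote(samples, trim=1):
    """Drop the `trim` lowest and highest samples, then average the rest."""
    kept = sorted(samples)[trim:len(samples) - trim]
    return sum(kept) / len(kept)

# Six sampled probabilities for one question that resolved "yes" (outcome 1).
samples = [0.55, 0.60, 0.62, 0.65, 0.70, 0.90]
print(round(brier(mean_vote(samples), 1), 4))          # → 0.1089
print(round(brier(trimmed_mean_vote(samples), 1), 4))  # → 0.1278
```

Lower Brier scores are better: 0 is a perfect forecast, and 0.25 corresponds to always predicting 0.5.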
This shows that the alternative baseline is not really better.
Thank you for this suggestion, and we will make sure to include this baseline in the final paper.
> Some description is confused, like in section 6.1, “our averaged Brier score is .179, while the crowd achieves .149”, I don't see the paper say which table the results come from.
They are from Table 3, page 5 (this table is the main result of our paper).
Please let us know if there are other concerns we can address! If not, we hope you can consider increasing your score. Thank you again for your review. | Summary: The authors contribute a novel system that approaches human-level forecasting performance. The authors also contribute a dataset of forecasting questions submitted to various human forecasting websites. The authors show that their system generally approaches human crowds. In some settings, where the LLM can selectively submit forecasts, they find that their system even outperforms humans. The authors conduct a series of ablations across their system, and highlight how each component contributes to the overarching forecasting ability.
Strengths: This is a well written paper that tackles an interesting problem for LLM based systems. The dataset it contributes is also quite useful. The ablations are careful, and the Appendix does a good job detailing aspects of the dataset, ablations, prompts, and other design decisions. Providing a system that can forecast events at a level that can rival humans and serve as a complement across decision making also has important practical applications (that the authors also carefully discuss).
Weaknesses: I would cite some crowdworking papers from other fields (e.g. HCI) just to highlight the effectiveness of crowd work in the related work section.
Minor: I would’ve liked more qualitative examples at each step of the system in the text: e.g. what the retrieved articles looked like, other questions, etc. instead of hunting through the appendix.
Closed models: I wonder how far Llama 2 / 3 could go, given the same fine-tuning setup. Given that the OpenAI models are closed, it would’ve been nice to see if we could push open-source fine-tuning to achieve similar deltas.
How good are the humans on these markets, exactly? They seem public in nature. Is the average forecaster on Metaculus an uninformed person? I think that is worth emphasizing in the paper—or at least qualifying some of the findings with a crowd of human amateurs.
Many of the weaknesses are formulated as questions in the question section.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Will the authors release the dataset if/when the paper is published? I think the collected dataset, even if it becomes stale soon, is very useful!
- A benefit of this system is that rationale---even if unfaithful---can be extracted from the LLM. Is there any kind of analysis on the underlying reasoning that might shed more light on the types of evidence an LLM finds "important" across forecasting tasks? In general, a more thorough error analysis of the final system would have been nice!
- In the selected prediction setting, are there certain subareas or domains where the LLM is more likely to submit a forecast?
- Also, is there an ablation with just finetuning and no IR? I’m curious to see how much the LLM system actually uses the retrieved docs.
- Purely for curiosity reasons: would the authors expect this to work on stock market predictions? Why or why not?
- I’m happy to raise my score a bit if some of these points are addressed!
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for evaluating our work.
> I would cite some crowdworking papers from other fields (e.g. HCI) just to highlight the effectiveness of crowd work in the related work section.
Thanks for pointing this out! We plan to cite the following review paper “Ghezzi, Antonio, Donata Gabelloni, Antonella Martini, and Angelo Natalicchio. "Crowdsourcing: a review and suggestions for future research." International Journal of management reviews 20, no. 2 (2018): 343-363.” in the related work section.
> Minor: I would’ve liked more qualitative examples at each step of the system in the text: e.g. what the retrieved articles looked like, other questions, etc. instead of hunting through the appendix.
To provide a better sense, the retrieved articles are exactly a subset of what you might find on Google News by searching “Candidate X winning chance”. And for questions on elections, the LLM typically suggests questions like “Who are candidate X’s competitors”, “What is the fundraising situation of candidate X” etc.
> Closed models: I wonder how far Llama 2 / 3 could go, given the same finetuning setup. Given that the OpenAI models are closed, it would’ve been nice to see if we could push open source finetuning to achiever similar deltas.
Llama 3 unfortunately was pre-trained on data up until mid 2024. Due to potential leakage, we cannot evaluate the Llama-3 series on our dataset, which has questions on events beginning June 1st, 2023.
We fine-tuned a Mixtral 8x7B model under the same setup; yet, it only obtained a Brier score of ~0.21, far from the level of performance of GPT-3.5 or GPT-4. We conjecture, though, that more recent open models like Llama 3 should be able to achieve the same level of forecasting ability.
> How good are the humans on these markets, exactly? They seem public in nature. Is the average forecaster on Metacalculus an uninformed person?
While individual forecaster performance varies on Metaculus, we compare our system to the Metaculus crowd prediction, which consistently beats even the top forecasters, making it a strong target to compete with (see https://www.metaculus.com/notebooks/15760/wisdom-of-the-crowd-vs-the-best-of-the-best-of-the-best/).
We spent considerable effort sourcing individual forecasters’ records across different platforms. However, most platforms do not release individuals’ raw forecasts. In some cases, they release forecaster scores, but since the platforms use different and sometimes ambiguous scoring methods, we were not able to source the individual probabilities. We hope future work could address this issue in a different way, perhaps by recruiting human forecasters directly to compete with LLMs.
> Will the authors release the dataset if/when the paper is published? I think the collected dataset, even if it becomes stale soon, is very useful!
We have released the dataset on Hugging Face. We are not putting it here in respect of the anonymization policy of NeurIPS.
> Also, is there an ablation with just finetuning and no IR?
No, since our fine-tuning dataset consists of retrieved articles. As a consequence, the fine-tuned model naturally requires retrieved articles as part of its inputs. Our qualitative examples, however, do show that our models significantly rely on the retrieved articles to reason and make predictions. See Appendix J for some cases.
> Purely for curiosity reasons: would the authors expect this to work on stock market predictions? Why or why not?
Our system can beat expert human forecasters, especially in the selective prediction settings. However, we have not made any targeted optimization for stock forecasts.
Please let us know if there are other concerns we can address! If not, we genuinely hope you can consider increasing the rating. Again, thank you for reviewing our paper! | Summary: The authors benchmark LLMs ability to perform on the task of forecasting, or predicting the outcome of future events. They test several methods and find that ensembling pretrained and fine-tuned LLMs which have access to news sources produces predictions similar to the accuracy of humans.
Strengths: Authors collect dataset and human baseline to benchmark task
Authors investigate many model designs and ablations to understand which factors lead to high accuracy.
Authors present a model which is comparable to human accuracy on a challenging new task
Authors provide analysis of the difference between model and human prediction distributions.
Authors benchmark a number of different models.
Weaknesses: Dataset is relatively small, as there is limited data in existence for which there is a human baseline.
Technical Quality: 4
Clarity: 4
Questions for Authors: none
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for evaluating our work.
> Dataset is relatively small, as there is limited data in existence for which is there a human baseline.
We note that our dataset is the largest and most up-to-date available for automated forecasting. Compared to the latest work, which includes 3,833 binary questions (https://arxiv.org/abs/2206.15474), our dataset is 1.4 times larger, comprising a total of 5,516 binary questions. Additionally, for each of these 5,516 questions, there are crowd forecasts across multiple time stamps, resulting in a total of 1,118,154 forecasts.
In our work, we utilize up to 5 time stamps ("retrieval dates"), amounting to 22,064 forecasts, which is still significantly larger than any prior work. Moreover, we release this dataset of 5,516 forecasting questions and 1,118,154 forecasts, along with a larger dataset containing 33,664 questions and 4,044,325 forecasts (see Table 11 in Appendix C.2).
Please let us know if there are any other concerns we can address. If not, we hope you will consider increasing the rating. Thank you again for reviewing our paper.
---
Rebuttal 2:
Comment: I maintain my existing score. The work serves as a useful first benchmark for this area. It is technically solid and thorough, with moderate impact on new use cases of LLMs. | Summary: The authors develop a forecasting system that uses news article retrieval and reasoning to predict future events. The system performs with near-human capability and is also complementary to humans. Thorough ablations and evaluations are done to identify that each component of the paper's method provides meaningful improvements to the prediction accuracies of the system.
Strengths: Originality: Tackles a highly impactful and important field of predictions. To my knowledge not many efforts as well-organized as this work have been made towards this.
Quality: Methods, evaluations, etc. are done extremely carefully including hyperparameter search, testing multiple dates of prediction, and fine-tuning multiple models.
Clarity: Writing is clear and easy to understand. No issues.
Significance: Accurate predictions are highly applicable to almost any macro-level problem. This paper and its results are very significant, and the improvement (though small) is better than humans which is already a big deal.
Weaknesses: Overall it is already very great. Kudos to you. A few suggestions to improve the paper:
1. More motivation in the introduction/general paper on why prediction is important: Other reviewers/readers may not understand the degree of importance prediction tasks have in fields such as social science.
2. More generally, I feel that the paper could benefit from having more focus on a story of why this matters, such as a short discussion section. In particular, discussion on scalable oversight/broader impact and how to manage models that have these capabilities would be appreciated.
(not necessary): Would be interested in seeing comparisons against human experts if there are any such datasets out there.
Technical Quality: 4
Clarity: 4
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: No Limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for evaluating our work.
> More motivation in the introduction/general paper on why prediction is important: Other reviewers/readers may not understand the degree of importance prediction tasks have in fields such as social science.
We discuss the broader impact in more detail in Appendix I, since we were a bit constrained by the page limit of the NeurIPS submission. Nevertheless, the camera-ready version allows for extra space, and we plan to add additional detail on the importance of prediction tasks and the broader implications of LLM forecasters.
Thank you for highlighting the significance of our work.
> Would be interested in seeing comparisons against human experts if there are any such datasets out there.
We spent considerable effort sourcing individual forecasters’ records across different platforms. However, most platforms do not release individuals’ raw forecasts. We hope prediction platforms will make this data accessible in the future, or that this issue could be addressed through other means, such as recruiting human forecasters to compete directly with LLMs.
On the other hand, it is noteworthy that the community aggregate typically outperforms aggregates of the top 5, 10, ..., 30 best forecasters (based on past scores), making it a very strong benchmark for comparison. For more details, see this analysis: https://www.metaculus.com/notebooks/15760/wisdom-of-the-crowd-vs-the-best-of-the-best-of-the-best/.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. It would be great to see the contents of Appendix I incorporated into the main text in the final version.
As previously mentioned, a brief discussion on scalable oversight/broader impact and how to manage models that have these capabilities, maybe in the Appendix, would still be appreciated.
"it is noteworthy that the community aggregate typically outperforms aggregates of the top 5, 10, ..., 30 best forecasters (based on past scores)"
This is reassuring, and supports the paper well.
Happy to keep my current score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UniAR: A Unified model for predicting human Attention and Responses on visual content | Accept (poster) | Summary: In this paper, the authors introduce a novel text-image framework designed to integrate various human-response tasks and multiple image domains. These tasks include attention map generation, scanpath prediction, and subjective preference evaluation, applied to images such as webpages, natural scenes, graphic designs, and mobile user interfaces. The framework utilizes a transformer architecture, accepting both text and image inputs. Text inputs specify the tasks, while the model itself comprises a transformer encoder and three predictors (heatmap, rating, and scanpath) to generate three types of outputs. Initially pretrained on foundational text-image tasks, the model is fine-tuned for diverse human-response tasks.
Strengths: 1. The question of unifying human response-related vision models is novel and holds significant potential, particularly if the advantages of integrating tasks are thoroughly explored.
2. The paper is well-organized and easy to understand.
Weaknesses: 1. I have many concerns on the experimental results of this paper, where the authors try to demonstrate the superiority of their model performance by comparing with other methods. My concerns are as follows.
a) The metrics in Table 3 are not a standard set of metrics for saliency prediction. Why don’t the authors report the complete set of evaluation metrics in Table 3? It seems strange to drop popular metrics such as sAUC and SIM in Table 3. It is a standard practice to report all standard metrics, i.e., those commonly used in saliency prediction, as in the online benchmark (https://saliency.tuebingen.ai/evaluation.html). It seems odd to me that the authors only report selected metrics in Table 3 but report all in Table 6 (which is in the appendix).
b) Following a), why do the authors put both Table 3 and Table 6? Both of the tables occupy the same space. The authors can just put Table 6 in the main manuscript instead of Table 3.
c) Why do the comparison methods vary greatly across different testing datasets? It would be helpful if the authors stuck to a fixed set of methods and compared their performance across the same testing datasets. Deep Gaze II and IIE are recognized for their strong performance on images of general scenes. Why did the authors choose to test these models only on the CAT2000 dataset and not on other general-scene datasets such as Salicon and OSIE?
d) The SalGAN paper [60] reports metrics on the Salicon dataset; why do the authors not report this in Table 3 or Table 6 for SalGAN on the Salicon dataset, but instead only report its performance on the webpage dataset? If the authors have simply copied results from other works, incorporating these additional benchmarks would require minimal effort and should have been included. If the authors have tested the models themselves, why not test them on popular benchmarks such as Salicon and OSIE?
e) Can the authors explain why they don’t report SequenceScore for COCO-Search18? This column is denoted as "-" in Table 4.
f) Furthermore, the results on COCO-Search18 reported for the comparison method FFM [79] in its own paper [79] are different from what the authors report in Table 4, both under target-present conditions. The SemSS scores in [79] under target-present conditions are consistently above 0.53, and the method of Chen et al. [16] achieves 0.572. Can the authors provide more details on how they obtained the current results in Table 4 for the comparison methods?
g) Can the authors explain why they didn’t compare the scanpath prediction performance of ALOHA and baselines on COCO-FreeView in Table 4, even though they use COCO-FreeView in Sec. 4.4?
h) It would be helpful if the authors could experiment with more popular benchmark datasets such as MIT300 and MIT1003, and compare with the performances published on the online benchmark (https://saliency.tuebingen.ai/results.html).
i) Minor: some top performances are wrongly indicated, e.g., on the webpage dataset (FiWI), the best performance on KLD should be Chen et al. [13].
In summary, excluding the SOTA models on common benchmark datasets significantly undermines the experimental evidence for ALOHA's competitiveness, especially considering that 7 out of the 11 datasets in the experimental section are for free-viewing saliency prediction (see Table 1). Based on all the above, I feel that the current experiments cannot adequately support the paper's claims. The paper needs more discussion and comparison results to support its claim that the model achieves SOTA performance.
2. Although the authors claim that the unified model could “act as a comprehensive reward model, predicting subjective ratings/preferences as rewards and providing additional insights from predicted human attention and behavior patterns,” the experiments do not clearly demonstrate the actual benefits of unifying the attention, rating, and scanpath tasks. The motivation of this work is not well positioned.
3. Some technical details of the paper are not clear. E.g., a) the three predictors appear to operate independently, but it is unclear how they were chosen; b) why text generation is chosen for scanpath generation; c) how the model is fine-tuned on various datasets.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How are the three predictors chosen for each task? Are they assigned in a fixed manner based on different input texts?
2. Are there any differences between the salience map and the importance map in terms of architecture or loss function, or do the differences lie solely in the training data?
3. In the "decoding during inference" section, the paper mentions, “If there is no fixation available in the predicted string, we mark the scanpath as invalid.” This invalid case (“no fixation”) only covers situations where all token pairs are non-numerical. If some tokens in the output sequence are in an unrecognizable format, rendering part of the results invalid, how should this be evaluated and handled?
4. Why is text generation used as a scanpath predictor?
5. How is the model fine-tuned on various datasets, and which parts are frozen?
6. All my above questions regarding the numerical results (Table 3 and Table 6) in the Weakness section.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The reported numerical results do not adequately support the paper's claim.
More experiments to demonstrate the model's task transferring ability are needed. For example, transferring from saliency map to rating, or from scanpath to saliency map.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and we address each point below.
---
**Metrics in Table 3 and Table 6**
A: We include the important metrics and baseline methods in Table 3 so that it is more readable with a larger font size. We have included a reference to Table 6 in the Table 3 caption for completeness. Some metrics are not included in Table 3 because the baseline papers did not report them, hence the empty columns in Table 6.
---
**Different comparison methods across different testing datasets**
A: For fair comparison, we refer to each baseline’s performance in their original papers and follow the same evaluation protocols / metrics, which can vary on different benchmarks.
---
**Why test Deep Gaze II / IIE only on CAT2000 and not on Salicon and OSIE?**
A: Deep Gaze II / IIE papers reported results on CAT2000 but not on Salicon validation set / OSIE. We did not re-implement any baseline due to the large number of datasets / tasks we needed to compare.
---
**Why not include SalGAN on the Salicon dataset in Table 3 or Table 6?**
A: There are two versions of the Salicon data, Salicon 2017 (http://salicon.net/challenge-2017) and Salicon 2015 (http://salicon.net/challenge-2015). All results in our table are on the newer Salicon 2017 dataset, in line with the baselines in Table 3, while the SalGAN results were obtained on the Salicon 2015 dataset. The 2015 version differs in fixation ground truth, affecting metrics like NSS scores (higher for Salicon 2015 than 2017), so the results on Salicon 2015 are not comparable to Salicon 2017.
---
**Why no SequenceScore for COCO-Search18?**
A: The computation of the SequenceScore requires a standardized superpixel segmentation which, after communicating with the authors, we found is not available. To avoid an inaccurate / unfair comparison, we did not include this metric on this dataset.
---
**FFM [79] results on COCO-Search18 different from the ones in Table 4?**
A: We refer to the results of FFM in Gazeformer [56] paper, which appeared in Table 6 in Appendix. There were some issues regarding the evaluation results on COCO-Search18 due to the misalignment of the ground-truth semantic segmentation map. The authors have updated their results in Appendix, which are the correct version that we cited in Table 4.
---
**Why not compare scanpath prediction performance of ALOHA and baselines on COCO-FreeView?**
A: We have the scanpath results on COCO-FreeView, but since it is a relatively new dataset, we did not find a good baseline for scanpath prediction on it, so we did not include this dataset in our table. One relevant work is [1]. However, it didn’t report scanpath prediction performance on COCO-FreeView.
[1] Characterizing target-absent human attention. (CVPR'22)
---
**Additional experiments on MIT300 and MIT1003**
A: We have a model that did not include MIT1003 in training and was directly tested on MIT300, and the results are suboptimal. However, since many methods on the MIT300 benchmark dashboard are trained on MIT1003, for a fair comparison we will train a model on MIT1003 and report the results on MIT300.
---
**The experiments do not clearly demonstrate the actual benefits of unifying the attention, rating, and scanpath tasks**
A: The immediate benefit of a unified model is that only one model is needed instead of separate models for each task, leading to easier model serving with better performance. Furthermore, there are strong connections between attention and subjective experiences like aesthetic preference, subjective interpretation, emotional response, or ease of information finding (e.g., focused attention often means easy, while scattered attention means hard). To this end, we collected and will release a new dataset which contains both scores (easiness scores for question answering tasks) and gaze heatmaps (recorded while performing question answering tasks) on digital images (designs, UIs, etc.), and show that the score and the entropy of the gaze heatmaps are correlated, to better motivate the unified model. Please see more details in Figures 3 and 4 in the rebuttal PDF.
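As an illustration of the heatmap-entropy measure mentioned above (the discretized heatmaps below are hypothetical; the paper's actual computation may differ):

```python
import math

def heatmap_entropy(heatmap):
    """Shannon entropy (bits) of a flattened, non-negative gaze heatmap.
    Focused attention -> low entropy; scattered attention -> high entropy."""
    total = sum(heatmap)
    probs = [v / total for v in heatmap if v > 0]
    return -sum(p * math.log2(p) for p in probs)

focused = [9.7, 0.1, 0.1, 0.1]        # nearly all mass on one region
scattered = [0.25, 0.25, 0.25, 0.25]  # uniform over four regions
print(heatmap_entropy(focused) < heatmap_entropy(scattered))  # → True
```

Under this sketch, an image where the task is easy (attention concentrated on the answer region) yields lower entropy than one where attention is scattered.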
---
**How are the three predictors chosen for each task?**
A: During training, we specify the task in the text prompt and compute gradients only using the loss of that task (e.g., scoring task if “OUTPUT: score” is in the prompt). During inference, the task info (i.e., “OUTPUT: score”) in the prompt will inform the model which task to perform and we retrieve results from the corresponding predictor.
---
**Differences between the salience map and the importance map?**
A: The two tasks use the same heatmap L2 loss and only differ in data. Note that the two tasks will have different text prompts, namely “OUTPUT: saliency heatmap” for saliency, and “OUTPUT: importance heatmap” for importance.
---
**Invalid output format for scanpath prediction?**
A: We first split the output sequence by the separator (“and” in our case). Then each component should contain two coordinates of x and y. If a <x, y> coordinate is invalid (e.g., only x but no y value), we skip that coordinate. All predicted scanpaths are valid after the models were finetuned.
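A minimal sketch of this decoding logic (the separator "and" is from the description above; the whitespace-delimited numeric coordinate format is an illustrative assumption, not necessarily the model's exact output format):

```python
import re

def decode_scanpath(output):
    """Parse a generated scanpath string into (x, y) fixations.

    Splits on the separator "and"; components with fewer than two
    numbers are skipped. Returns None (invalid scanpath) if no
    fixation can be parsed.
    """
    fixations = []
    for part in output.split("and"):
        nums = re.findall(r"-?\d+(?:\.\d+)?", part)
        if len(nums) >= 2:  # need both x and y; otherwise skip this pair
            fixations.append((float(nums[0]), float(nums[1])))
    return fixations if fixations else None

print(decode_scanpath("120 64 and 300 210 and 87"))  # third pair skipped
print(decode_scanpath("no fixation"))                # → None
```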
---
**Why text generation as a scanpath predictor?**
A: The text decoder can generate an arbitrary number of tokens (up to the max length) and is thus suitable for scanpaths, which also have variable length. As the vision-language model is pretrained on a coordinate prediction task, the decoder learned the notion of coordinate tokens and performed well for predicting scanpaths, which mainly consist of coordinates. As a unified model, we hope the model can generalize to different kinds of sequences, which can be predicted by the text decoder.
---
**How is the model fine-tuned on various datasets, which parts are freezed?**
A: All model weights are finetuned and no parts are frozen. We randomly sampled batches from all the datasets during training (more details in Section 4.2).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the rebuttal. However, I remain unconvinced by the explanations provided. As a researcher in the saliency field, I consider sAUC and SIM to be essential metrics for evaluating saliency prediction, yet they are notably absent from Table 3. This omission challenges the authors' assertion that Table 3 includes "the important metrics."
Furthermore, in a scientific research paper, it is crucial to include key metrics and provide comprehensive evaluation, which should take precedence over concerns about font size. Therefore, I find the decision to place detailed information in Table 6 in the supplementary material, while presenting only partial and duplicate information in Table 3 in the main paper, to be unjustified, especially when both tables occupy the same length. The authors' reasoning to make Table 3 "more readable with a larger font size" seems insufficient and unconvincing.
Similarly, the explanation that the authors "did not find a good baseline" does not sufficiently justify the exclusion of scanpath results for COCO-FreeView, particularly given that COCO-FreeView is used in Section 4.4. Moreover, the authors did not adequately address the question regarding the SemSS scores in [79], where scores under target-present conditions are consistently above 0.53, and the method of Chen et al. [16] achieves 0.572.
The examples provided to demonstrate the benefits of unifying attention, rating, and scanpath tasks are rather weak and not sufficiently convincing. First, the examples involve only attention and rating, without scanpath, which undermines the claim of presenting a truly unifying example. Second, the observed correlation between score and heatmap entropy is expected, as it is natural for more challenging tasks to result in more complex browsing patterns. Additionally, the correlation observed is weak by statistical standards.
In view of the above, I would like to keep my rating as reject.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for your prompt reply, we really appreciate it! We would love to have another round of response for some clarifications, and hope those help!
---
**Table 3 & Table 6**
We agree that there is some duplication between these two tables. Our intention in including Table 3 was primarily presentational.
We are happy to replace Table 3 with Table 6 in the main paper to include metrics sAUC and SIM, or the full set of eight metrics.
---
**Scanpath results for COCO-FreeView**
As one of the datasets we used in our experiments, we have the results from our model on COCO-FreeView. We would welcome the opportunity to compare our results with established baselines. If you could kindly direct us to a publicly available benchmark for scanpaths on the COCO-FreeView dataset, we would be glad to include a comparison with these baselines in our paper.
---
**SemSS score from FFM [79]**
Regarding the discrepancy in results, we have taken steps to clarify the issue.
We contacted the first author of FFM [79] directly to discuss the mismatch. The author confirmed that the results of this experiment in FFM [79] are indeed problematic due to incorrect labels. With the author's permission, we are willing to share their response for your reference, which validates our approach of referring to the updated results in Gazeformer [56].
> The SemSS of FFM in the two papers (GazeFormer and FFM) are different, because there was a mistake in generating the semantic labels in the FFM paper. Labels got mismatched in image size, which leads to systematic shift in SemSS for all methods (namely, all methods got higher SemSS scores, but the order maintains). We should refer to Table 6 (b) in GazeFormer as the more reliable SemSS scores for GazeFormer, FFM, and Chen et al.
Hope this clarifies the question.
---
**Unifying attention, rating, and scanpath**
1. Scanpath is naturally correlated with the attention map: an attention map is aggregated from the scanpaths of multiple users. In this way, rating, attention, and scanpath can be unified via a prediction model. We do have scanpath data; however, we will be cautious about releasing it due to privacy concerns.
2. We are happy that you agree the observed correlation between score and heatmap entropy is expected. This verified evidence further motivates the unified model, which is the foundation for studying the correlation between the heatmap and score for better performance. There are many other cases where subjective ratings/experiences and human attention (heatmap) are related, e.g., “Salient-Centeredness and Saliency Size in Computational Aesthetics”, ACM Transactions on Applied Perception 2023; “Aesthetic Attention”, https://philpapers.org/archive/NANAA-2.pdf. So a unified model to predict subjective ratings/experiences and human attention/scanpath will help advance the research in this direction.
3. We observed a p-value around 7e-7 for the correlation between easiness score and heatmap entropy, indicating the correlation between easiness score and gaze heatmap entropy is statistically significant.
---
We hope our clarification and the follow-up response help you better understand our work. A reconsideration of our work would be highly appreciated. | Summary: Noticing existing issues in human behavior modeling, such as the isolation of implicit, early-stage perceptual behavior (like human attention) from explicit, later-stage behavior (like subjective preferences) and the restriction of models to a single visual content type, the author(s) of this manuscript aimed to build an integrated human attention and preference behavior model that addresses multiple visual content types. Through empirical experiments, the author(s) demonstrated the effectiveness of the proposed model.
Strengths: In my opinion, the strengths of this manuscript are as follows:
1. Designed a unified approach to model human visual behavior: image+text=>human perceptual behaviors.
2. Extensive experiments are performed to validate the effectiveness of the proposed model.
Weaknesses: In my opinion, the weaknesses of this manuscript are as follows:
1. This manuscript seems not to discuss the model's limitations in real-world situations, such as its performance in dynamic environments or its adaptability to changes in user behavior over time.
2. As the author(s) described, ALOHA has 848 million parameters. Training such a large model requires significant computational resources, which may not be accessible to most researchers or practitioners.
Technical Quality: 4
Clarity: 4
Questions for Authors: I read the manuscript, and I have the following questions/comments. Thanks.
1. Are there any potential applications of ALOHA in real-world scenarios or in dynamic environments? How can it be used to optimize elements, such as user interfaces, graphic designs, and content creation?
2. Are there any potential biases in the current model's predictions?
Tiny format issues in References:
(1) Sometimes the journal/conference name uses an abbreviation, sometimes not, and sometimes both, such as Ref.[25], Ref.[67].
(2) Keep the format of the references consistent, such as Ref. [6] vs. Ref.[15].
Please check carefully and correct the issues.
Overall, ALOHA represents a significant advancement in modeling human visual behavior, offering a unified approach that spans from early-stage perceptual responses to later-stage decision-making. I think this is an interesting manuscript.
I look forward to hearing from the author(s). Thanks.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and we address each point below.
---
**Model's limitations in real-world situations, such as its performance in dynamic environments or its adaptability to changes in user behavior over time.**
A: The current model does not consider dynamic environments or changes in user behavior over time, which is indeed a limitation. We will add this limitation to the discussion section. Please also see our response to the question of “Stay up to date” from reviewer FgZM.
---
**ALOHA has 848 million parameters. Training such a large model requires significant computational resources, which may not be accessible to most researchers or practitioners.**
A: We acknowledge the resource requirement might be difficult for researchers without access to enough GPUs / TPUs. There are techniques to reduce model training cost, like LoRA or Parameter-Efficient Fine-tuning (https://huggingface.co/blog/peft), which could be employed with far fewer resources. However, using such techniques would be beyond the scope of this work. We will include this info in the paper and develop models with lower resource requirements in our future work.
---
**How can it be used to optimize elements, such as user interfaces, graphic designs, and content creation?**
A: ALOHA can help optimize content in several ways. For example, the predicted saliency heatmap can be used to remove the distracting areas in visual content, similar to “Deep Saliency Prior for Reducing Visual Distraction”: https://arxiv.org/abs/2109.01980. If the content is generated by a generative model, ALOHA’s predicted score can be used as a reward score to improve the generative model, with learning from human feedback/preference method, such as DPOK (https://arxiv.org/pdf/2305.16381) and DRaFT (https://arxiv.org/abs/2309.17400).
---
**Are there any potential biases in the current model's predictions?**
A: Please see the global rebuttal.
---
**Format issues**
A: We will make the edits according to the feedback. | Summary: This paper introduces ALOHA, a multimodal model that predicts human saliency, scanpath, and subjective rating of natural images, webpages, and graphic designs. ALOHA outperforms or performs similarly to baseline models across each of its prediction tasks while improving generalizability over task-specific models.
Strengths: *Generalizable model of human visual behavior and preferences.* The ALOHA model predicts multiple forms of human preference --- the saliency, scanpath, and rating --- of various types of image inputs, including web pages, natural images, and cartoons. This improves the generalizability of ALOHA over prior models that focus on a single prediction task or data input modality.
*In-depth analysis of model performance compared to prior benchmarks.* The paper provides a thorough evaluation of ALOHA's task performance across 13 metrics, 11 datasets, and 4 tasks. ALOHA outperforms or performs on par with pre-existing models across all evaluations.
*Easy to read paper.* The paper is well motivated and clearly explained. Figure 1 is a particularly helpful overview.
Weaknesses: My biggest concern is the paper’s lack of engagement with the limitations and ethical considerations of modeling subjective human preferences. Currently there is a brief discussion of training data limitations in the supplement. I would like to see a thorough discussion of considerations included (or at least referenced) in the main text. Possible discussion points that come to mind include:
* What are the ethical considerations of replacing humans with models trained to replicate their preferences? Humans have diverse and often opposing preferences, particularly for subjective notions of aesthetics or attention. It is possible that models trained to replicate human preferences could learn a more uniform or singular notion of preference. Could using these models limit the visual diversity in webpages and content creation? Or could these models actually help us generate more visually diverse content?
* Whose preferences are included in the model and, more importantly, whose aren’t? Humans likely have diverse preferences when it comes to subjective notions like website attractiveness. I would suspect these preferences vary based on age, experience with technology, culture, etc. Building a model to replicate human perspectives may amplify a particular worldview that is not representative of all users. On the other hand, it may be easier to create a model trained on diverse human data than to get a diverse set of human feedback for every new webpage design.
* The paper suggests using the model as a reward function to optimize content creation. Could this result in negative consequences, such as content that is optimized for a model’s preference function and may not be value aligned with what humans want?
* How could human preference models, like ALOHA, be misused? For example, it seems like these models could be used to make ads more intrusive by optimizing their scanpath location or to make phishing sites more convincing by optimizing the saliency of the credential input.
* How should human preference models take into account blind or low-vision users? Scanpaths and saliency likely differ between users who rely on screenreaders and those who don't. Optimizing the web for sighted preferences could further exclude blind and low-vision users from online spaces. Given the [emphasis](https://www.ada.gov/resources/web-guidance/) on web accessibility, how can we use human preference models to amplify (not weaken) online accessibility?
* How should models stay up to date with the changing human preferences? Human preferences, especially around aesthetics, are constantly changing. As a result, what are the implications for interpreting the evaluations used in this paper that span datasets from 2014–2023?
The paper would also be strengthened by an audit of their model’s behavior beyond quantitative performance metrics, such as an analysis of the diversity of the humans included in the training dataset, a categorization of the types of mistakes the model makes, and a [Model Card](https://arxiv.org/abs/1810.03993) reflecting its design.
I recognize that the paper is inheriting these ethical concerns from the preexisting datasets and tasks that it works on. I hope that by including a discussion of the model's implications, the paper will progress research into these types of models and inspire work on new datasets, model analysis, and safe deployment techniques.
I might also suggest that the authors should select a new name for their model. I am not an expert in Hawaiian culture, but I know that the term ‘aloha’ is culturally significant, and there have been [Hawaiian movements to reclaim it](https://www.usatoday.com/story/life/health-wellness/2023/01/13/stop-saying-aloha-out-of-context/10990192002/), such as the [‘Aloha Not For Sale’ protests](https://kawaiola.news/cover/aloha-not-for-sale-cultural-in-appropriation/). Using it out of context as a model name, may minimize the rich cultural meaning behind the term, and given it is an imperfect acronym anyway, should be an easy change.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the Weaknesses section above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see the Weaknesses section above.
Flag For Ethics Review: ['Ethics review needed: Discrimination, bias, and fairness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate you bringing the ethical concerns to our attention. We recognize that such concerns are common with machine learning models, particularly those involving user preference and behavior modeling. We are committed to addressing these issues to the best of our ability by expanding the ethics and limitations section.
---
**Uniform vs diverse human preferences**
We acknowledge that models carry the risk of converging towards a more uniform notion of preference, a concern shared by all machine learning models. To promote visual diversity in our model, we propose,
1. Hybrid Approach: Initially, the model should be used in a hybrid manner, providing insights without replacing human decisions in web optimization, and
2. Personalized Models: Develop personalized models based on our initial unified model. This approach will help generate more diverse predictions based on user attributes.
---
**Demographics of annotators**
Please see our response in the global comment.
---
**Aligning with human preference**
While our model should be approximately aligned with human preferences, we recognize that using it as a reward model may lead to reward hacking. We plan to incorporate techniques [1, 2] for mitigating this when we use it as a reward model.
[1] Parrot: Pareto-optimal Multi-Reward Reinforcement Learning Framework for Text-to-Image Generation
[2] ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization
---
**Properly using ALOHA model**
Like many technologies, human gaze prediction models could potentially be misused. Without safeguards, e.g., if access is left unrestricted for users without credentials or data is collected irresponsibly, the risk increases. To ensure the proper usage of our model, we propose,
1. Placing restrictions on model access to prevent unauthorized applications or users;
2. Adhering to strict ethical guidelines on data collection, and actively monitoring the collection process at scale; and
3. Being transparent about our model's capabilities and limitations. We believe that keeping humans in the loop and only using the model prediction as a reference but not replacing humans is important.
Internally, we strive to follow the ethical guidelines of our institution, ensuring that the model usage remains controlled and responsible. This has also caused us to be cautious about making our model publicly available.
---
**Blind and low-vision users**
Our model can benefit blind and low-vision screen-reader users by highlighting the most important areas of a webpage via heatmap predictions, making them more accessible via screen readers. We recognize that the current training data may have limitations in representing the full spectrum of user experiences. We plan to enhance the inclusiveness by,
1. Multi-Modal Preference Modeling: We're developing our model to incorporate not just visual cues, but also how users interact with content through screen readers, voice commands, and other assistive technologies;
2. Collaboration with Accessibility Experts: We plan to collaborate with accessibility experts and organizations representing blind and low-vision users for future iterations of our work.
---
**Stay up to date**
Staying up to date is essential for models, and can be achieved by fine-tuning with more recent data. Continual learning techniques also support this idea [1]. However, updating the training data is out of the scope of our paper, as we want to focus on developing the first unified model on the diverse user modeling tasks.
[1] A comprehensive survey of continual learning: theory, method and application
---
**Error Analysis**
We analyzed 50 error examples in the Koniq-10K dev set and found some interesting patterns, see examples in Figure 1 in our rebuttal PDF. The first example shows the most common error category when our model predicts a higher score for colorful but blurry images. The second example shows another common model error where a low-light image gets a higher prediction score. For the last black-and-white example, on the contrary, our model predicts a lower score than the groundtruth. These errors demonstrate that human preferences for image aesthetics can depend on diverse factors including clear focus, correct lighting, and artistic style.
Similarly, for heatmap prediction, we analyzed the 30 examples with the lowest NSS scores on the Salicon validation set. The example in Figure 2 in the rebuttal PDF demonstrates the most common error category where the groundtruth contains a more scattered gaze heatmap. In such cases, our model prediction might not focus on the right objects.
We will conduct more error analysis on other model predictions and include this info in the paper.
---
**New content to include**
Happy to incorporate your recommendations and include a discussion on the following action items:
1. The diversity / demographics of annotators
2. Model failure cases and discussions
3. A model card
---
**Model’s name**
We are happy to choose a new name from alternative names for our model, for example,
1. GLAM: From Glances to Likes: A Unified Model for Understanding Human Visual Behavior
2. TOTORO: from attention TO likes TO human RespOnses – a unified model of human visual behavior
3. HABITAT: modeling Human Attention and Behavioral InTeractions Across diverse visual content using a multimodal Transformer
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to think deeply about the implications of this research! I appreciate your diligence in finding demographic information and doing error analysis, your thoughtfulness in responding to each of my concerns, and your commitment to include a model card and these considerations in the paper.
In addition to what you have listed in "New content to include", I would suggest:
* Beyond discussing the limitations of the demographic information that exists, I suggest also discussing limitations in the demographic information that is not reported. From your demographic analysis, we only know the annotators' age and gender. However, we do not know their location, race, country of origin, education level, etc. --- all of which contribute to someone's worldview and could change the way they interact with a webpage.
* I would like your discussion section to include all of the limitations you have mentioned in this rebuttal --- a uniform model for diverse human preferences, reward hacking, safety measures for human preference models, making these models accessibility friendly, etc. --- not just the demographic info, error analysis, and model card.
I appreciate your suggestions for new names. I think all the ones you suggested are great, so I will leave the name decision up to you.
Assuming these changes are reflected in the final manuscript, I am happy to accept the paper and have increased my score accordingly.
---
Reply to Comment 1.1.1:
Title: Reply
Comment: Thank you for helping us improve our paper and increasing your rating!
> discussing limitations in the demographic information that is not reported
That's a very good idea and thanks for your kind reminder! We will make sure to involve this part in our final discussion on demographic information. By quoting all the available information we have, we will also discuss the missing information in demographics, which could be useful for future reference.
> include all of the limitations you have mentioned in this rebuttal
For sure, we will add these thoughts in our rebuttal to the final version of the paper with a clear structure.
We appreciate your effort in reviewing this paper and we really enjoy talking to you. | Summary: In their paper "ALOHA: from Attention to Likes – a unified mOdel for understanding HumAn responses to diverse visual content" the authors describe a new unified model to predict human saliency (attention/importance) and, at an even finer granularity, scanpaths, as well as ratings.
After nicely introducing the motivation and giving an extensive outlook on related work, the authors describe the multimodal VLM-based encoder-decoder transformer architecture in detail, especially the three predictor heads. Exploiting the power of instruction-tuned LLMs, the authors introduce additional tokens to predict valid scanpaths at inference time.
The benefits of their model are evaluated based on many experiments, which are briefly discussed, followed by a short conclusion.
Strengths: The main strength of the paper is the experiments, which are extensive across several benchmarks and metrics.
Integrating a three headed VLM for the tasks is intuitive and elegant.
Their proposed ALOHA architecture is SOTA in several settings (22/35).
The writing is mostly easy to follow and there is a clear picture the authors want to paint.
Weaknesses: - The writing can be improved in several places, I'll note here a few:
1. Introduction:
- In the introduction, it seems that the authors already have the conclusion at the end, switching from present to past tense for the main contributions of the paper. This makes it sound as if the authors already published ALOHA previously.
- In the same two points of the main contributions, the authors actually only have one contribution, their model ALOHA. The second "contribution" is only the evaluation of ALOHA.
- Regarding the previous comment: the second contribution (even though from my point of view it is not a contribution) is the evaluation of ALOHA, not the training of ALOHA.
3. Unifying Human Visual Behavior Modeling from Attention to Likes:
- The overall optimization criterion should be part of the main paper, not in the appendix.
- Model training (Section 4) should, from my point of view, be part of Section 3; it is not part of the experiments since you do not report multiple training strategies.
It is unclear how hyperparameters were tuned.
Code will not be made public, which is a major concern in the current reproducibility crisis in ML. Additionally the question in the questionnaire is wrongfully answered as N/A; the paper DOES INCLUDE experiments requiring code. (GUIDELINE: The answer NA means that paper does not include experiments requiring code.)
Technical Quality: 3
Clarity: 3
Questions for Authors: You mention that you pad images to 512x512 -- what happens with images larger than 512x512?
How did you arrive at your optimization criterion, scaling for the learning rates and in general all hyperparameters? You did not report any hyperparameter tuning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and we address each point below.
---
**The writing can be improved in several places**
A: We will edit the paper according to the suggestions.
---
**How hyperparameters are tuned**
A: We did a training-validation split on our larger datasets and then tuned the hyperparameters (e.g., learning rate, batch size, dropout rate, loss weights of the 3 heads) on the validation set through grid search. The model was then trained with the full training data with the chosen hyperparameters.
---
**What happens with images larger than 512x512?**
A: We resize the images to have a max height or width of 512 and center-pad the smaller dimension to 512.
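As a concrete illustration of this preprocessing step, here is a minimal pure-Python sketch (the helper name is ours, and the choice to leave already-small images unscaled is an assumption; the authors' actual code is not shown):

```python
def resize_and_pad_dims(width, height, target=512):
    """Compute the resized dimensions and centered padding used to fit an
    image into a target x target canvas: downscale so the larger side equals
    `target` (images already small enough are left unscaled), then pad the
    remaining space symmetrically."""
    scale = min(1.0, target / max(width, height))
    new_w, new_h = round(width * scale), round(height * scale)
    pad_left = (target - new_w) // 2
    pad_top = (target - new_h) // 2
    return (new_w, new_h), (pad_left, pad_top)

# e.g. a 1024x768 image becomes 512x384, centered with 64 px above and below
```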
---
**Code will not be made public, which is a major concern in the current reproducibility crisis in ML**
A: Given the sensitivity of this topic, and our concern that the code may be misused if made public (see comments from Reviewer FgZM and our rebuttal on "Properly using ALOHA model"), our institution has a strict policy on open-sourcing code in this area. But to help advance the research, we will provide enough details (including all the info in the rebuttal) and all necessary communication and support for other researchers in this area to reproduce our work, who we believe will make use of this technique appropriately. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough and constructive feedback. We have addressed each point in the individual responses. We have included some of the common points of discussion as below. Moreover, in the rebuttal pdf, we also included some figures to answer the questions of “Error Analysis” from reviewer FgZM, and “the actual benefits of unifying the attention, rating, and scanpath tasks” from reviewer Fbj6. We will also fix all the formatting issues pointed out by the feedback.
---
**Demographics of annotators [Reviewers FgZM, ew4]**
We check the training data collection processes and include the participant demographics that we have found.
WS-Saliency: “A total of 41 participants (19 females, 22 males; age range 17-23; with normal or corrected-to-normal vision) participated in our data collection.”
Mobile UI: “Thirty participants (12 male, 18 female). [...] The average age was 25.9 (SD=3.95). The participants had normal vision (8) or corrected-to-normal-vision (22). Twenty of the 22 wore glasses and the remaining two wore contact lenses.”
Imp1k: “The data of 43 participants (29 male, most in their 20s and 30s) were used in the resulting analyses.”
FiWI: “11 students (4 males and 7 females) in the age range of 21 to 25 participated in data collection. All participants had normal vision or corrective visual apparatus.”
Based on the available descriptions, we found that participant gender is reasonably balanced. However, the age distribution is somewhat skewed, likely because most collections were conducted at universities. Datasets like Salicon and Koniq-10K, which utilize crowdsourcing workers for annotations, are expected to have a better balance in terms of age and other attributes. We will add this information to the limitations discussion.
---
**Bias in the data sets or the model [Reviewer ew4]**
The model is trained with multiple public data sets, each of which might have some bias. So it is possible for our model to have also learned such bias from those datasets. Since this paper introduces the first model to unify all the human visual behavior tasks, we focus on getting a model to work for these tasks with good accuracy, and we plan to implement techniques to evaluate and mitigate bias in future iterations of our work. We will however add this discussion in our limitation section along with demographics information discussed above.
---
We look forward to the discussion phase and further improving the paper.
Pdf: /pdf/e500261bf61f2a0d1d0a7cded6b163da27bb5069.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling | Accept (poster) | Summary: This paper presents a new method for text-to-motion generation. In this method, human motion is represented as 2D tokens in a codebook. This allows the authors to apply 2D operations on 3D motions and use a 2D masking strategy. The architecture is composed of a VAE that learns the codebook and a Transformer that learns the relation between the CLIP embedding of the text input and the corresponding codebook tokens using spatial-temporal attention. With this method, the authors outperform state-of-the-art approaches quantitatively and qualitatively. An ablation study shows the effect of each component.
Strengths: The paper is clear and detailed.
The mixed use of codebook, masking, and spatial-temporal transformer is interesting.
The method outperforms the state of the art quantitatively and qualitatively.
The ablation clearly shows the effect of each component.
Weaknesses: Figure 3b is not very clear. It seems that the CLIP embedding is concatenated to the flattened motion embedding and then positional encoding is applied, while the text says that the CLIP embedding is added after positional encoding. I also don't understand why positional encoding is added twice, once on the flattened vector and once on the matrix.
An ablation showing that this concatenation of token and text is better than, for example, cross attention would have been welcome. Another interesting ablation would have been to see the performance of the model when computing the temporal and spatial attention in parallel instead of sequentially.
It is not clear whether P is added only during attention or also on the inputs like in the base transformer. There is also no explanation as to why positional encoding is added after computing the attention matrix.
On several metrics the ground truth is beaten, but the paper does not provide an explanation for this.
The paper does not describe how the FID features are extracted.
Classifier-free guidance should be mentioned in the main paper, not just in the appendix. It is an important component.
Regarding the motion editing figure: why is only one hand raised with Temporal-Spatial Editing, while the temporal editing result shows both hands being raised? Since the plural is used in the prompt, this would indicate that temporal editing is better.
Technical Quality: 3
Clarity: 3
Questions for Authors: It should be mentioned somewhere that j^i_t contains the 3 dimensions of joint i.
A user study would have been nice. Metrics are difficult to use on these more complex actions.
It might be better to mention CLIP in the overview instead of waiting for Section 3.5.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are very briefly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and constructive suggestions! We hope our responses adequately address the following questions raised about our work. Please let us know if there is anything we can clarify further.
**1. Clarification of positional encoding.**
Sorry for the confusion caused. We will clarify this in the revision.
- The positional encoding is performed after adding the CLIP embedding. We will revise the manuscript in the revision.
The reason why positional encoding is added twice is that we want to reinforce the network's awareness of the spatial-temporal 2D structure. Adding positional encoding once should also work.
- The positional encoding P is added on the inputs like the base transformer.
- We add positional encoding after computing the attention matrix because we want to reinforce the network's awareness of the spatial-temporal 2D structure before the next attention computation.
**2. Ablation of cross attention of token and text and computing the temporal and spatial attention in parallel.**
Thanks for this constructive advice. We perform the ablation study as suggested and present the results in Table 2.
From the results, we can see that cross attention of token and text and computing the temporal and spatial attention in parallel both perform similarly to our original framework.
| **Setup** | **FID** | **Top1** |
|-------------------| :------------------:| :------------------:|
| Cross attention of token and text | 0.034 | 0.521 |
| Computing the temporal and spatial attention in parallel | 0.035 | 0.522 |
| Ours | 0.033 | 0.529 |
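For readers curious what "sequential" means here, the following dependency-free toy sketch (our illustration only, using plain self-attention with no learned projections; not the authors' implementation) runs temporal attention over each joint's track and then spatial attention within each frame on a T x J token grid:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(seq):
    """Plain scaled dot-product self-attention over a list of vectors
    (no learned projections -- a toy stand-in for one attention layer)."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in seq])
        out.append([sum(w * v[i] for w, v in zip(scores, seq))
                    for i in range(d)])
    return out

def temporal_then_spatial(grid):
    """grid[t][j] is the token for joint j at time t (a list of floats).
    Sequential variant: attend over time per joint, then over joints per frame."""
    T, J = len(grid), len(grid[0])
    # temporal attention: each joint attends across frames
    tmp = [[None] * J for _ in range(T)]
    for j in range(J):
        track = attend([grid[t][j] for t in range(T)])
        for t in range(T):
            tmp[t][j] = track[t]
    # spatial attention: each frame attends across joints
    return [attend(row) for row in tmp]
```

The parallel variant from the ablation would instead run both attentions on the original grid and merge the results, e.g. by averaging.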
**3. Why ground truth is beaten.**
The ground truth is beaten on the text-motion matching metrics, so we think this is because the text labels are not perfectly annotated.
We will add this explanation in the revision.
**4. FID features extract.**
Following previous methods, we use the same models proposed in T2M [5] to extract the FID features. We will add this description in the revision.
**5. Classifier free guidance.**
We will mention this in the main paper in the revision.
**6. Motion editing figure.**
Sorry for the misleading presentation.
In Figure 5, the Temporal-Spatial Editing is just to show that we could edit only specific joints while keeping other joints fixed. This level of granular control is hard to realize in previous methods that encode all joints to one token.
**7. User study.**
Thanks for this constructive advice. For the user study, we selected 10 texts and displayed the generated results of T2M-GPT, MoMask, and Ours. Due to the limited time, we only collected responses from 14 users, resulting in 140 total votes. Of these, 102 votes favor our results, 33 prefer MoMask, and 5 prefer T2M-GPT, as shown in Figure 1 in the uploaded PDF file.
**8. Other questions.**
- We will mention that $j^i_t$ denotes a 3D rotation of joint i on time t in the revision.
- We will mention CLIP in the method overview in the revision.
- Limitations: The brevity is due to the limited space. We will try to include a more extensive discussion in the revision.
---
Rebuttal 2:
Title: Rating after rebuttal
Comment: The authors clarified the few things that weren't clear to me. The user study is appreciated. I keep my original weak accept rating.
It seems that the values reported inside the user study graph are wrong (102,73%).
---
Rebuttal Comment 2.1:
Title: Thanks for the reviewer's feedback
Comment: Thanks for the reviewer's feedback. We are pleased to see that our response has provided clarification on some points.
Sorry for the confusion. When we state (102, 73%), we mean that there are 102 votes, which represent 73% of the total votes. We will revise this for clarity. | Summary: This paper proposes an approach for text-conditioned motion generation. A common practice in this area is to use a quantized representation of human motion obtained with a VQ-VAE. However, most prior works represent the full body by a single token, which makes accurate reconstruction complicated.
In this work, the authors propose a new way of quantizing human motion: a single token is associated with a single joint. Then, the motion can be represented as a 2D grid of indices corresponding to spatial and temporal dimensions. Using this new representation, this paper proposes to generate motions using masked generative modeling.
In summary, the contributions of this paper are:
- A new quantization of the human motion, representing each joint in a 2D map of tokens.
- A masking strategy allowing the leverage of spatiotemporal information preserved by the proposed quantization.
- A masked generative modeling strategy to generate new motions conditioned on text input.
Strengths: I would say that the main strength of this paper is not the novelty: masked generative modeling was already used for human motion generation [17]. However, this paper brings new components that make a lot of sense and seem to greatly impact the results. The bottleneck of prior works (1 pose = 1 token) is well-identified, and the proposed quantization strategy addresses this problem effectively by associating a token to each joint. In addition to improving the reconstruction after quantization, this representation proves useful as it preserves the spatiotemporal structure of the motion. Carefully designed operations (2D token masking, spatial-temporal motion Transformer) benefit from the proposed representation despite its higher dimension.
Another strength of this paper is the evaluation. The comparisons follow the standard procedures and seem totally fair to other methods. Providing confidence intervals by running experiments multiple times is an excellent practice. Even for the evaluation of the quantization in Table 2, I find it very good that the authors decreased the codebook size of the introduced model for fair comparison with other methods. In addition to the 2 datasets widely used for comparisons, the appendix provides results on numerous datasets, which is appreciated to evaluate the model's generalization capability. The ablations are also satisfying, as they allow to evaluate the impact of the main introduced components.
Weaknesses: The main weakness of this paper is that it is difficult to understand the quantization of human motion:
- L76 "each joint is quantized to an individual code of a VQ book": From my understanding, with the residual quantization, each joint is quantized to a sequence of indices; the final code is the sum of codes corresponding to those indices and associated codebooks.
- Equation 1: Given L159, it seems that one joint in the input is converted to one token. Equation 1 suggests the same (the input of the encoder would be of dimension 3). And then L272, "Both the encoder and decoder are constructed from 2 convolutional residual blocks with a downscale of 4," so I really do not understand at all. Is there a spatiotemporal reduction?
- Equation 2: This does not correspond to residual quantization. Maybe it is meant to simplify the understanding, but I find it very confusing.
Globally, it is very difficult to understand how the method works until we reach section 3.5.2. For instance, until then, I did not understand the notion of a 2D map since the residual quantization would have made the grid 3-dimensional. I also wondered how the masking could encompass the depth of the quantization.
Other minor issues include:
- L125: It would be better to mention methods that represent a single pose (or human) with multiple tokens [a,b].
- The presentation of Table 1 is not optimal. Giving the dataset in the table instead of the caption would be more clear (like in [17]). Also, why are there no bold results for diversity? It may look like this is because some other methods have better results.
[a] Geng, Z., Wang, C., Wei, Y., Liu, Z., Li, H., & Hu, H. (2023). Human pose as compositional tokens. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 660-671).
[b] Fiche, G., Leglaive, S., Alameda-Pineda, X., Agudo, A., & Moreno-Noguer, F. (2023). VQ-HPS: Human Pose and Shape Estimation in a Vector-Quantized Latent Space. arXiv preprint arXiv:2312.08291.
Technical Quality: 4
Clarity: 2
Questions for Authors: - How does the quantization work? I would like to understand if there is a spatial-temporal reduction, as the information in the paper seems contradictory.
- From my understanding in Section 3.3, there is no information about which joint is processed in the encoder (so the encoder processes in the same manner every joint). Can the VQ-VAE be considered a quantization of the 3D space as a learned grid?
- From Figure 2 and other explanations, the pose seems flattened to correspond to a column in the 2D map. How are the joints organized so that the spatial structure of the pose is preserved? Is there a topological ordering?
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The section about the limitations is very short. I think that it could be improved by proposing more solutions to the quantization problem and other research perspectives. Otherwise, it sounds like the problem of motion generation is now completely solved.
The authors say that this work has no societal impact at all. I would agree that the impact on society is very limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and constructive suggestions! We hope our responses adequately address the following questions raised about our work. Please let us know if there is anything we can clarify further.
**1. Clarification of the quantization.**
Sorry for the confusion caused. We will clarify this in the revision. In this paper, we do not intend to detail the residual quantization, since we simply follow established methods to employ this technique, as noted in L169. Thus for simplicity, we describe our method using only single-level quantization without incorporating a residual structure.
Therefore,
- L76: Here we only describe the single-level quantization without the residual structure.
- Spatiotemporal reduction: In equation 1 within the method section, we describe our methodology that does not include spatiotemporal reduction.
However, our experimental results indicate that appropriate spatiotemporal reduction can decrease computational load without significantly impacting performance.
So we perform a spatio-temporal reduction in the experiments and describe this in L272 in the implementation details.
- Equation 2: Yes, here we only describe the single-level quantization without the residual structure.
- Masking: The masking is only performed on the base-level. The prediction of the tokens in residual levels is based on the previous level without masking, following the previous method MoMask.
**2. Encoder processes in the same manner every joint.**
Yes, the encoder processes every joint in the same manner. The 2D joint map, with dimensions T×J×D (where T represents the sequence length, J denotes the number of joints, and D indicates the feature dimension), is processed by the 2D convolutional networks, similar to image processing techniques.
After the encoder, the quantization (either single-level or residual) begins to quantize the encoded vectors.
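As a shape-level illustration of the T×J×D joint map described above (the concrete values of T, J, and D below are hypothetical, not the authors' configuration), the motion is treated exactly like an image of height T and width J with D channels, so 2D operations apply directly:

```python
import numpy as np

# Hypothetical sizes: T frames, J joints, D feature channels.
T, J, D = 64, 22, 8
joint_map = np.zeros((T, J, D))

# Treated as an "image" of height T and width J with D channels, any 2D
# operation (convolution, 2D positional encoding, 2D attention) applies
# directly. With the downscale of 4 mentioned in the rebuttal, the
# resulting token map would have the following shape:
downscale = 4
token_map_shape = (T // downscale, J // downscale, D)
```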
**3. Spatial structure.**
The order of the joints is fixed in both training and inference, so we hope the network can learn the spatial structure of the flattened joints in the training, with the 2D positional encoding.
**4. Minor issues and limitations.**
We will mention [a, b] in the revision.
Table 1: Giving the dataset in the caption is due to space constraints. We will attempt to revise this. The absence of bold results for diversity is because it is hard to say which method is better on this metric.
Limitations: The brevity is due to the limited space. We will try to include a more extensive discussion on quantization in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for an insightful rebuttal addressing most of my concerns. I still lean towards accepting this paper.
I have a doubt about the **spatial structure**. I understand that the joints' ordering is the same at training and inference and that there is a positional encoding. My question was more about the order of the joints once flattened: does it follow an order that preserves the structure of the skeleton (for instance, left shoulder -> left elbow -> left wrist, ...), or is the spatial information exclusively in the positional encoding?
For the lack of space in the caption of Table 1 and the limitations, I would suggest moving the limitations to the annexes.
---
Rebuttal 2:
Comment: Thanks for the reviewer's feedback. We are pleased to see that our response has addressed most of the concerns.
Regarding the spatial structure, the flattened order follows the HumanML3D dataset and is as follows: 'pelvis', 'right_hip', 'left_hip', 'spine1', 'right_knee', 'left_knee', 'spine2', 'right_ankle', 'left_ankle', 'spine3', 'right_foot', 'left_foot', 'neck', 'right_collar', 'left_collar', 'head', 'right_shoulder', 'left_shoulder', 'right_elbow', 'left_elbow', 'right_wrist', 'left_wrist'. These joints are arranged in order of proximity to the pelvis joint, from nearest to furthest.
Sure, we will move the limitations to the annexes and expand the discussion.
Strengths: 1. The paper introduces a novel joint-level quantization approach, addressing the complexity and spatial information loss issues seen in whole-body pose quantization. By organizing motion sequences into a 2D token map, the method takes advantage of powerful 2D image processing techniques, enhancing feature extraction and motion generation.
2. The integration of 2D joint VQVAE, temporal-spatial 2D masking, and spatial-temporal 2D attention forms a robust framework that effectively captures spatial-temporal dynamics in human motion.
3. Extensive experiments demonstrate the method's efficacy, outperforming previous state-of-the-art methods on key datasets.
4. The paper is well-written and easy to understand. The supp. mat. video provides comparisons with MoMask.
Weaknesses: 1. Although the overall idea of joint-level quantization is interesting, I still have the concern of computational overhead. While motion representation is typically lightweight, the use of 2D code maps and spatial-temporal attention can introduce significant computational overhead, similar to image data processing. It would be beneficial to compare the inference speed of mainstream methods (e.g., MoMask, T2M-GPT, MLD, etc) to show that the state-of-the-art performance is achieved with comparable computational costs.
2. The experiments are conducted on relatively small datasets (HumanML3D and KIT-ML). To better validate the effectiveness of the proposed method, experiments on larger-scale datasets, such as Motion-X, would be advantageous.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses. Overall, the idea is interesting. My major concern is the extra computational cost of this method, which could be much larger than previous methods with body-level VQ-VAE yet it is not investigated in the paper.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper discusses its limitations: the approximation error in the VQ-VAE and the need for a larger dataset to train a more accurate VQ-VAE.
I think there is no potential negative societal impact in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and constructive suggestions! We hope our responses adequately address the following questions raised about our work. Please let us know if there is anything we can clarify further.
**1. Computational overhead of the proposed method.**
Thanks for this constructive advice. We test the computational overhead of different methods on an NVIDIA 4090 GPU and report the average inference time per sentence in the Table 1.
Although our method does not achieve the shortest inference time, the increase in computational overhead is not significant.
Overall, the computational overhead of our method is comparable to that of mainstream methods.
| **Methods** | **Average Inference Time per Sentence** |
|-------------------| :------------------:|
| MLD | 134 ms |
| MotionDiffuse | 6327 ms |
| MDM | 10416 ms |
| T2M-GPT | 239 ms |
| MoMask | 73 ms |
| Ours | 181 ms |
**2. Experiments on Motion-X.**
Sorry for the lack of clarity. We have in fact conducted motion quantization experiments on the Motion-X dataset, as shown in Appendix A.3.1 and the table below. The quantization results show that our method also works well on the larger Motion-X dataset.
Because there is no official text-motion feature model for Motion-X, it is hard to evaluate the text-to-motion generation performance of our method. We trained our own text-motion feature model following HumanML3D, but it does not work well. We will clarify this in the revision and continue our attempts to evaluate our method on the Motion-X dataset.
| **Methods** | **MPJPE** | **FID** | **Top1** | **Top2** | **Top3** | **MM-Dist** | **Diversity** |
|-------------------| :------------------:| :------------------:| :------------------:| :------------------:| :------------------:| :------------------:| :------------------:|
| Ground Truth | - | - | 0.420 | 0.631 | 0.754 | 2.800 | 10.100 |
| MoMask | 111 | 0.081 | 0.396 | 0.604 | 0.725 | 2.955 | 9.837 |
| Ours | 48.7 | 0.011 | 0.417 | 0.627 | 0.750 | 2.832 | 10.113 |
---
Rebuttal Comment 1.1:
Comment: After carefully reading the other reviews and the author rebuttal, I think this paper proposes an effective and efficient method for motion generation. My initial concerns about the efficiency and the results on Motion-X have also been resolved in the author rebuttal. I will keep my original rating and lean toward accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for the reviewer's feedback
Comment: Thanks for the reviewer's feedback. We are pleased to see that our response has addressed the concerns. If there are no further concerns, please also consider raising the rating. Many thanks! | null | null | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to all the reviewers for their time and their valuable feedback. We deeply appreciate their recognition of our work, such as
**Reviewer g7ot:**
"a novel approach",
"a robust framework that effectively captures spatial-temporal dynamics",
"Extensive experiments demonstrate the method's efficacy, outperforming previous state-of-the-art methods".
**Reviewer SBXK:**
"this paper brings new components that make a lot of sense and seem to greatly impact the results"
**Reviewer 1rhT:**
"The mixed use of codebook, masking and spatial-temporal transformer is interesting",
"The method outperforms the state of the art quantitatively and qualitatively"
In the following, we address each reviewer’s comments point by point, and attach some figures in the uploaded PDF file.
We hope our responses adequately address the questions raised about our work. Please let us know if there is anything else we can clarify further.
Pdf: /pdf/7ccbc7bdefb41bc37df224c86a3f34dbd9750e45.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models. | Accept (poster) | Summary: In this paper the authors propose a number of engineering tricks which enable generating at higher resolutions from a pre-trained txt2img diffusion model.
Notably, the requirements for the proposed method are relatively low.
In the proposed approach, an image of a standard resolution is generated first.
After that, it is upsampled (in RGB space) to the desired size and used as guidance for the new, truly high-resolution image, which is generated as a sequence of overlapping patches.
The guidance mechanism is implemented by mixing the imaginary components of the Fourier decompositions of the currently denoised patch and the guidance patch.
This procedure takes place not for all the denoising steps, but only up to a certain noise level referred to as the Slider.
The value of the Slider controls how similar the original standard-resolution image and its higher-resolution version are.
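The spectral-mixing step described in this summary might be sketched as follows. This is a minimal NumPy illustration under our own assumptions (a plain 2D FFT on real arrays, with the function name `fuse_imaginary` invented here); it is not the paper's implementation:

```python
import numpy as np

def fuse_imaginary(z_patch: np.ndarray, z_guid: np.ndarray) -> np.ndarray:
    """Keep the real FFT component of the denoised patch but take the
    imaginary component from the upsampled guidance patch (illustrative)."""
    F_patch = np.fft.fft2(z_patch)
    F_guid = np.fft.fft2(z_guid)
    # Recombine: real part from the current patch, imaginary from guidance.
    fused = F_patch.real + 1j * F_guid.imag
    # Inverse transform; discard the (small) residual imaginary part.
    return np.fft.ifft2(fused).real
```

When the guidance patch equals the current patch, the operation is an identity, which is a quick sanity check for this kind of mixing.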
Performance of the method is evaluated with commonly used metrics such as FID, KID, IS and CLIP-similarity.
Also, a lot of samples are provided for visual inspection.
Strengths: According to the provided metrics and visual results, the methods performs quite well in comparison with recent baselines.
The fact that it requires only 7.4 GB of memory makes it very affordable for the community.
The idea of mixing in the spectral domain is, to the best of my knowledge, novel enough in the context of ultra-high-resolution sampling from a pre-trained model.
Exploiting the checkerboard pattern for mixing with guidance is also an interesting engineering trick.
Weaknesses: 1. While the authors several times mention that "the imaginary part in the frequency space contains most of the low-frequency information of the image" (line 187) and "imaginary part provides more structural information than the real part" (line 249), they do not provide any references for this statement or supporting experiments. To me, this fact does not look evident enough. I am sure this needs more thorough explanation since otherwise it looks just like an empirical trick.
1. From the text, it is unclear which Slider value was used to obtain the numbers from Table 2. Was it the value of 30, as defined in line 244? Also, what exactly is "using next inference step (NIS)" (see line 253)? Is this about checkerboard mask mixing?
1. The presentation of Fig. 3 is not easy for understanding, I advise considering its redesign. For example, forward diffusion and taking imaginary part of FFT are denoted in the same way.
1. It is unclear from the text what the guidelines are for selecting the size for the zone of averaging for overlapping patches (line 805).
Technical Quality: 3
Clarity: 3
Questions for Authors: I ask the authors to address the weaknesses listed above during the rebuttal period. In particular, I am interested in the deeper justification of using imaginary Fourier coefficients than empirical evidence.
POST-REBUTTAL UPDATE: I keep my initial score. I think that although the method is very simple, this does not mean that the submission is bad. I believe that with improved presentation this paper can be interesting for the practitioners.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No actions needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive criticism. We welcome any comments that can help improve the quality of our work.
>W1. While the authors several times mention that "the imaginary part in the frequency space contains most of the low-frequency information of the image" (line 187) and "imaginary part provides more structural information than the real part" (line 249), they do not provide any references for this statement or supporting experiments. To me, this fact does not look evident enough. I am sure this needs more thorough explanation since otherwise it looks just like an empirical trick.
Please refer to the global response.
>W2. From the text, it is unclear which Slider value was used to obtain the numbers from Table 2. Was it the value of 30, as defined in line 244?
Yes, the Slider position was 30. This is also indicated by the identical metrics in Tables 1 and 2. Nevertheless, we understand that it should be specifically mentioned in the Comparison section; as such, line 264 is expanded as follows: "... refer to Appendix B. The Slider position is set to 30.".
We note here that the position is not cherry picked but chosen randomly. Different Slider positions will yield different results (as shown in Table 1), and there may be other positions that could potentially perform even better.
>W1. Also, what exactly is "using next inference step (NIS)" (see line 253)? Is this about checkerboard mask mixing?
Yes, this is correct. The "next inference step" mentioned in the Ablation study refers to the use of the chess-like mask. As noted in line 253, "... in Appendix G, incorporating the next inference ...". Appendix G provides further explanation of this process. We recognize that this may have been confusing, so the issue has been addressed as follows:
* The line 241 is changed to "... information from the next inference step using masking and the impact ..."
* The paragraph starting with line 252 is changed to " **Masking (column No Mask):** Analysis in column No Mask shows that not using the chess-like mask yields better scores on two metrics. However, as shown in Appendix G, incorporating the mask is crucial for removing artifacts, so we continue to use it despite the metrics."
Please refer to our reply to reviewer wkBK for the fully revised Ablation study subsection.
>W3. The presentation of Fig. 3 is not easy for understanding, I advise considering its redesign. For example, forward diffusion and taking imaginary part of FFT are denoted in the same way.
We have revised Figure 3 to enhance clarity. Please refer to the attached PDF for the updated version.
>W4. It is unclear from the text what the guidelines are for selecting the size for the zone of averaging for overlapping patches (line 805).
This is a set number (not a hyperparameter) that we set to 10 and never change. From our experiments, the exact number is not critical, which is why line 805 refers to a "few pixels." We have set this number to 10, and it remains fixed, as variations like 15 or 20 pixels do not produce noticeable differences. This can also be seen in the provided code (smoothed_time_mask = create_gradient_border(time_mask, gradient_width=10)). We understand that specifying the exact value is preferable, so line 804 has been updated to read: "... a tolerance of 10 pixels in the overlap ...".
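For illustration, a minimal sketch of what such a border-smoothing helper could look like. The linear ramp below is our assumption; the `create_gradient_border` in the released code may be implemented differently:

```python
import numpy as np

def create_gradient_border(mask: np.ndarray, gradient_width: int = 10) -> np.ndarray:
    """Linearly ramp a binary patch mask toward zero over its outermost
    `gradient_width` pixels so that overlapping patches blend smoothly."""
    smoothed = mask.astype(float)
    h, w = mask.shape
    for i in range(gradient_width):
        # Weight grows from 1/(gradient_width+1) at the edge toward 1 inward.
        weight = (i + 1) / (gradient_width + 1)
        smoothed[i, :] = np.minimum(smoothed[i, :], weight)
        smoothed[h - 1 - i, :] = np.minimum(smoothed[h - 1 - i, :], weight)
        smoothed[:, i] = np.minimum(smoothed[:, i], weight)
        smoothed[:, w - 1 - i] = np.minimum(smoothed[:, w - 1 - i], weight)
    return smoothed
```

The interior of the mask stays at 1, so only the overlap region is averaged, matching the "few pixels" tolerance described above.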
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their feedback. | Summary: This paper proposes a training-free method for diffusion models to sample images at higher resolution with limited GPU memory, which introduces several tricks, such as fourier merging, chess-mask deduplication, and slider control. Experiments show the effectiveness of the proposed method.
Strengths: 1. The fourier merging seems interesting with imaginary part to maintain the global structure;
2. The chess mask seems effectively eliminates the duplicate artifacts in high resulotion generation;
3. The ablations are sufficient and comparisons show the superiority of the proposed method.
Weaknesses: 1. Is there any literatures to support that the imaginary part of the fourier transform corresponding to the low-frequency of the signal, or is there any deeper analysis beyond the ablation results in Tab.1 to illustrate this? By the way, it is suggested to provide the visual ablations of this part.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How to determine the patch orders in high-resolution generation, is it random orders or follow some principles.
2. From Tab. 2, it is not clear why Pixelsmith slower than Scalecrafter at 2048 resolution, while nearly 2x faster at 4096 resolution. Is there any trend that the bigger, the more efficient?
3. Is the content of the final scaled high resolution output significantly differ from the base output, whether the new objects will be generated?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive criticism. We welcome any comments that can help improve the quality of our work.
>W1. Is there any literatures to support that the imaginary part of the fourier transform corresponding to the low-frequency of the signal, or is there any deeper analysis beyond the ablation results in Tab.1 to illustrate this?
Please refer to the global response.
>W1. By the way, it is suggested to provide the visual ablations of this part.
Including visual ablations is indeed a good suggestion. They will be part of the final version.
>Q1. How to determine the patch orders in high-resolution generation, is it random orders or follow some principles.
The patch orders are random, as described in Lines 143 and 183. However, this randomness is controlled. We track which areas have been denoised (since each pixel is denoised only once per timestep) and avoid selecting those areas again. Therefore, while the selection of patches is random, it is constrained to areas that still require denoising. For example, if 90% of the latent space has already been denoised for timestep t, the coordinates for the next patches will be chosen from the remaining 10%.
If we selected coordinates purely at random without considering the denoised areas, the inference time would increase dramatically, rendering the process infeasible, especially for very high resolutions such as $32768^2$.
This explanation will be incorporated into the "Patch Sampling" subsection for added clarity.
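The coverage-constrained sampling described above could be sketched as follows. Function and variable names, and the corner-clamping scheme, are illustrative assumptions of ours, not the released code:

```python
import numpy as np

def sample_patch(coverage: np.ndarray, patch: int = 128, rng=None):
    """Pick a random patch location that still contains pixels not yet
    denoised at the current timestep, and mark that region as covered."""
    rng = rng if rng is not None else np.random.default_rng()
    ys, xs = np.nonzero(~coverage)  # pixels still awaiting denoising
    if ys.size == 0:
        return None  # this timestep is fully covered
    k = rng.integers(ys.size)
    # Clamp the top-left corner so the patch stays inside the latent.
    y = min(max(ys[k] - patch // 2, 0), coverage.shape[0] - patch)
    x = min(max(xs[k] - patch // 2, 0), coverage.shape[1] - patch)
    coverage[y:y + patch, x:x + patch] = True
    return y, x
```

Because each draw is restricted to still-uncovered pixels, every call makes progress, so a full sweep of the latent terminates in a bounded number of patches instead of the unbounded time a purely random choice could take.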
>Q2. From Tab. 2, it is not clear why Pixelsmith slower than Scalecrafter at 2048 resolution, while nearly 2x faster at 4096 resolution. Is there any trend that the bigger, the more efficient?
We extract patches of the same size, specifically $128^2$, regardless of the resolution. Pixelsmith requires 130 seconds for $2048^2$ (4,194,304 pixels) and 549 seconds for $4096^2$ (16,777,216 pixels). This represents a 4x increase in pixels and a 4.22x increase in inference time. Pixelsmith's inference time scales linearly with the number of pixels because the denoising UNet consistently uses patches of the same size. In contrast, Scalecrafter scales differently as it does not use patches. Unfortunately, Scalecrafter's official code provides scripts only for specific resolutions, so no further experiments can be conducted to examine how it scales at different resolutions; most importantly, there is also the constraint of the required memory. Pixelsmith, apart from not being constrained by memory, is very flexible. It is possible to generate an image with resolution 4096x1024 using a $1024^2$ base image; the final aspect ratio is not constrained by the base one. To accomplish this we only change the final resolution (i.e., just one number); no script or other optimization is needed.
>Q3. Is the content of the final scaled high resolution output significantly differ from the base output, whether the new objects will be generated?
Several factors influence the comparison between the final image and the base image. Key factors include the Slider position, the number of intermediate steps, the resolution of the final image, and the specific text prompt used. The Slider is designed to offer more control over the generation process. For instance, Figure 11 demonstrates how different Slider positions produce varying results, with position 49 being closest to the base image and position 1 showing excessive repetitions. However, in certain contexts, such as generating the surface of a planet with craters, having more craters can be beneficial. Thus, the final content can vary significantly or minimally, and this can be easily adjusted using the Slider.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer T7y4
Comment: Thanks for the authors' rebuttal. Most of my questions have been addressed; however, the reason why the imaginary part of the Fourier transform corresponds to the low frequencies of the signal is still not illustrated with deeper analysis and rests on the ablation experiments, which may undermine the potential inspiration. Therefore, I will not raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for the comment! We appreciate the opportunity to continue the discussion.
>Most of my questions have been addressed, however, the reason why the imaginary part of the fourier transform corresponds to the low-frequency of the signal is still not illustrated well with deeper analysis, and depend on the ablation experiments, which may undermining the potential inspiration
As stated in our global response, all the references have been changed to reflect the concerns raised. For example, in Lines 187 and 248, where it was previously mentioned that the imaginary part of the Fourier transform corresponds to the low-frequency information of the signal, this has now been removed. In our revised paper, we do not state a connection between the imaginary part and the low-frequency information and we base our choice of the imaginary part on the results of the ablation experiments. | Summary: The paper introduces Pixelsmith, a framework designed to utilize pre-trained diffusion models to enable high-resolution image generation using only a single GPU.
Patch-based denoising ensures that the entire generation process can be accommodated on a single GPU.
The Slider mechanism balances the trade-off between finer details and overall structure by controlling the transition from guided generation to native generation.
Guided generation operates on a latent fused from one upscaled higher-resolution image patch and one natively generated higher-resolution image patch in Fourier space, which, as claimed by the authors, helps maintain global structures.
Experimental results show that Pixelsmith not only produces high-quality and diverse images but also improves sampling time and reduces artifacts compared to existing techniques.
Strengths: 1. Memory Efficiency with Limited Computational Resources: Achieving high-resolution image generation on a single GPU is a good contribution, and the image generation quality is not compromised by this restricted setting.
2. High-Quality Generation: The proposed Fourier space fusion maintains fine-grained details and global structures, while also preventing some artifacts that occur in other methods.
Weaknesses: 1. Paper Structure Issues: The content before the methods section is too lengthy, making the key method and experiments sections comparatively short and harder to fully understand. These are elaborated upon in the following points.
2. Method Description is Hard to Follow: For instance, the overview in Lines #171-173 does not fully describe all the components involved in the method, requiring readers to review the paper multiple times to understand the entire pipeline. Additionally, the intuition behind combining $\hat{z}^{iFFT}_{t-1}$ and $\hat{z}^{guid}_{t-1}$ is not explained, which is confusing since $\hat{z}^{iFFT}_{t-1}$ already incorporates information from $\hat{z}^{guid}_t$.
3. Errors and Mismatches in the Experiment Section: According to Table 1, using the real part is better, which contradicts Line #248. Line #252 describes the wrong column for "the next inference step". Also, the term “base model” in Table 2 is unclear—does it refer to the same model as Table 2’s Pixelsmith and Figure 1’s base model?
4. Flaws in Figure Illustrations: In Figure 1, the base model image for “x256 (16384x16384)” is not displayed. Is this base model the same as the one mentioned in Table 1? In Figure 3, the illustration should be more concrete and easier to understand; however, some variable names are too small and there are too many blank areas. The fusion part is also unclear, with no mention of the real part of the Fourier transform and no explicit explanation of $\mu( . , . )$.
Technical Quality: 2
Clarity: 1
Questions for Authors: The aim of achieving high-resolution image generation on a single GPU is commendable, and the proposed method successfully accomplishes this goal. However, the presentation quality is relatively poor, especially with some key parts described incorrectly. I hope the authors can address the questions mentioned in the weaknesses section and resolve my concerns.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The authors have discussed the trade-off between achieving finer details and suppressing artifacts. They have also suggested proposing appropriate metrics for evaluating high-resolution image generation, which would be beneficial for the community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive criticism. We welcome any comments that can help improve the quality of our work.
>W1. (full question)
We have restructured the content based on the suggestions. The Related Work and Foundations sections have been shortened. The Method section has been revised, as detailed in the global response. Additionally, the Experiments section has been expanded to enhance comprehension as follows:
> ### Ablation study
> We conduct a qualitative examination of the framework on $2048^2$ image resolution. Specifically, we assess the effects of the Slider position, the importance of the imaginary part, the significance of incorporating information from the next inference step using masking and the impact of averaging overlapping patches.
>
> **Slider Position (columns SP0, SP24, SP49):** Our findings indicate that the Slider position significantly influences the results. The proposed model with a Slider position of 30 (Table 1, column Proposed) outperforms positions 0, 24, and 49 (Table 1, columns SP0, SP24, and SP49). A position of 0 introduces numerous artifacts, while a position of 49 lacks fine detail. Position 24 is close to the proposed but not optimal for the random subset. Position 30 was chosen randomly and is not cherry-picked. Appendix F demonstrates the effect of the position with a qualitative example.
>
> **Imaginary Part (columns Re, Re\&Im):** The proposed model (Table 1, column Proposed) averages the imaginary parts of the guidance latents and the current latents, then uses the real part of the current latents as described in the Method section (see Figure 3). This setup is chosen based on experimental results demonstrated here. We compare this with averaging the real parts of the two latent spaces and using the imaginary part of the current latents to invert back to the pixel space (Table 1, column Re), as well as averaging both the imaginary and real parts from the two latent spaces (Table 1, column Re&Im).
>
> **Masking (column No Mask):** Analysis in column No Mask shows that not using the chess-like mask yields better scores on two metrics. However, as shown in Appendix G, incorporating the mask is crucial for removing artifacts, so we continue to use it despite the metrics.
>
> **Averaging (column No Aver.):** Finally, we show that using patch averaging (Table 1, column Proposed) improves scores compared to not using it (Table 1, column No Aver.). This is because patch averaging eliminates patch artifacts, as seen in Appendix C.
>
> Table1: A quantitative examination of our framework through ablations.
>
>| Metric | SP0 | SP24 | SP49 | Re | Re&Im | No Mask | No Aver. | Proposed |
>|---------------|--------|--------|--------|--------|--------|--------|----------|----------|
>(unchanged)
>W2. (full question)
We have revised the Method description to better align with Figure 3 (please see the global response). Additionally, we have updated Figure 3 (please see the attached PDF) to enhance readability and coherence. Regarding the second part of your concern, please refer to Appendix G for a qualitative comparison and the Experiments section for a quantitative comparison. Every time a denoising step finishes, there is a chance for artifacts to appear. The chess-like mask is applied after the denoising, replacing some information with the guidance where no artifacts exist.
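For intuition only, a minimal NumPy sketch of such a chess-like blend between the current latent and the guidance latent (the function name, `cell` size, and 2-D array shapes are illustrative assumptions, not the paper's actual implementation, which operates on diffusion latent patches):

```python
import numpy as np

def checkerboard_blend(z_cur, z_guid, cell=1):
    """Blend two same-shaped latents with a chess-like binary mask:
    keep z_cur where the mask is 1, take the guidance elsewhere."""
    h, w = z_cur.shape[-2:]
    ys, xs = np.indices((h, w))
    mask = ((ys // cell + xs // cell) % 2).astype(z_cur.dtype)
    return mask * z_cur + (1 - mask) * z_guid
```

The idea, as described above, is that half of the positions are replaced with guidance information, limiting where newly denoised content (and hence artifacts or duplications) can appear.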
>W3. Errors and Mismatches in the Experiment Section: According to Table 1, using the real part is better, which contradicts Line \#248.
In Lines \#247-248 we state that "Using the imaginary part to average the guidance latents and the current latents is more effective than using the real part or...". This statement is consistent with the results shown in Table 1, where all four metrics demonstrate better performance when using the imaginary part (last column) compared to using the real part (column Re).
>W3. Line \#252 describes the wrong column for "the next inference step".
You are correct, this was an error. Thank you for bringing this to our attention. We have now corrected it.
>W3. Also, the term “base model” in Table 2 is unclear—does it refer to the same model as Table 2’s Pixelsmith and Figure 1’s base model?
Throughout the paper we refer to SDXL as the base model (one of the references is in Line \#258: "SDXL, which serves as the base model"). In the Experiments section, we also referred to our final proposed model as the base model, which caused confusion. This inconsistency has now been corrected by replacing "base model" in the Experiments with "proposed".
>W4. Flaws in Figure Illustrations: In Figure 1, the base model image for “x256 (16384x16384)” is not displayed.
The higher resolution images are in scale with those generated by the base model. In the “x256 (16384x16384)” case, the base model image is not missing but hard to see due to the significant difference in resolution. As with the other images in Figure 1, the base model image is located at the bottom-right, overlaid on the higher resolution image. We understand this may confuse readers, so we have added a white box around all base model images to make them more distinguishable. The legend of Figure 1 has been expanded as follows: "... Pixelsmith and the base model. The higher resolution images are in scale with the images generated by the base model. The base model generations are enclosed in a white frame. Some cut-outs ..."
>W4. Is this base model the same as the one mentioned in Table 1?
Please refer to our response to W3.
>W4. In Figure 3, the illustration should be more concrete and easier to understand; however, some variable names are too small and there are too many blank areas. The fusion part is also unclear, with no mention of the real part of the Fourier transform and no explicit explanation of $\mu( . , . )$.
We have revised Figure 3 (please see the attached PDF). For a detailed explanation, please refer to the global response. | Summary: This paper introduces a framework for generating high-resolution images from text prompts using pre-trained diffusion models. The key innovations are: A cascading approach that uses lower-resolution generated images as guidance for higher resolutions. A "Slider" mechanism to control the balance between following the guidance and allowing novel generation. Patch-based denoising to enable generation of arbitrarily large images on a single GPU. Averaging techniques to reduce artifacts from patch-based generation. The authors demonstrate that Pixelsmith can generate images up to 32,768 x 32,768 pixels on a single GPU, outperforming existing methods in terms of quality and efficiency.
Strengths: The proposed method enables generation of ultra-high resolution images without additional training, addressing an important limitation of current models.
Good results.
The patch-based approach allows for generating massive images on consumer GPUs, which is a significant practical advantage.
Weaknesses: 1. Poor presentation. The writing of this paper needs substantial improvement to be published in a venue like NeurIPS. First, the language of the paper needs to be improved. Second, the paper spends a lot of space on unnecessary content. The method in this paper is not difficult, but it is very difficult to understand the method in one reading. In terms of writing, the authors did not succeed in emphasizing the core concept of their proposed method. In addition, the length of the paper is too long. Such a simple method, but the experiments are not introduced until page 8, which should not happen.
2. The core idea of this paper seems to be to generate once and then upsample the generated image by patch-wise processing. (By the way, Figure 3 is difficult to understand and Eq. 4 is not explained). This method is essentially not much different from other operations that use diffusion for upsampling. So what is the advantage of this method? Why is this method feasible?
3. The role of FFT Transformation is not well demonstrated. No exploratory experiments are used to demonstrate the motivation and effect of this method.
Technical Quality: 2
Clarity: 1
Questions for Authors: See Weaknesses please.
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The authors claim they discussed the limitation but I didn't find it in the text. Correct me if I am wrong.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive criticism. We welcome any comments that can help improve the quality of our work.
>W1. Poor presentation. The writing of this paper needs substantial improvement to be published in a venue like NeurIPS. First, the language of the paper needs to be improved. Second, the paper spends a lot of space on unnecessary content. The method in this paper is not difficult, but it is very difficult to understand the method in one reading. In terms of writing, the authors did not succeed in emphasizing the core concept of their proposed method. In addition, the length of the paper is too long. Such a simple method, but the experiments are not introduced until page 8, which should not happen.
We have carefully reviewed and revised the language to make it easier for readers to follow. The Method section has been refined to highlight the core concept effectively. Additionally, we have edited the method figure to enhance clarity. See the global rebuttal and the PDF with the figure. Could you kindly provide specific examples where the language was not up to the NeurIPS level, as this will help us better revise our paper?
The "Related work" and "Foundations" sections have been shortened so that the "Experiments" can be introduced on page 7. Given the nature of the paper, images take a lot of space (for example, Figure 1 takes a full page), and as a result sections are pushed back.
>W2. The core idea of this paper seems to be to generate once and then upsample the generated image by patch-wise processing. (By the way, Figure 3 is difficult to understand and Eq. 4 is not explained). This method is essentially not much different from other operations that use diffusion for upsampling. So what is the advantage of this method? Why is this method feasible?
We have revised Figure 3 to make it easier for readers to understand. As mentioned in the global response, the revised "Method" section and updated Figure 3 are now better aligned, ensuring improved coherence.
Our method is fundamentally different from diffusion models used for upsampling. While diffusion models trained specifically for super-resolution generate high-resolution images from low-resolution inputs, our approach generates high-resolution images directly from text prompts. Diffusion models for upsampling primarily focus on the input image and largely ignore the text. In contrast, our method relies heavily on the text prompt, which drives the changes. As the patches denoise different areas of the latent space, the content of the text prompt becomes apparent.
Without our proposed method, numerous duplications would appear across the final image. Our framework restricts these duplications, and depending on the position of the Slider, some or a lot of new information will appear in the higher resolution that is not present in the lower resolution. For example, in Figure 1 (top right), the low-resolution cut-out box for the necromancer woman resembles a metallic connection without a distinct shape. Our method's patch denoising, considering the text prompt, reveals a skull, which makes sense for a necromancer. The patches attempt to denoise based on the text prompt but are constrained by the proposed key components to avoid, for example, denoising multiple necromancers all over the image. Current methods attempting similar results often end up with artifacts, which is not the case with Pixelsmith, as the Slider’s position can eliminate them.
In conclusion, diffusion models for upsampling lack this generative freedom. The downside of current works is the introduction of duplications and strange artifacts, but Pixelsmith provides control (via the Slider’s position) to avoid them.
This explanation will be reflected in the final version.
>W3. The role of FFT Transformation is not well demonstrated. No exploratory experiments are used to demonstrate the motivation and effect of this method.
The FFT transformation is used to fuse information between the guidance latent space and the current (higher resolution) latent space. The decision to use the imaginary part is based on empirical results. This is now reflected in our paper as mentioned in the global response. The effect is demonstrated in Table 1.
In Table 1:
- The column labeled **"Re"** represents the case where we averaged the real parts of both latents and used the imaginary part of the current latent.
- The column labeled **"Re&Im"** indicates that we averaged both the real and imaginary parts of the two latents.
- The final column shows the proposed method, where we averaged the imaginary parts of the two latents and used the real part of the current latent.
As shown, the proposed method in the last column performs better.
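As an illustration only, a minimal NumPy sketch of the fusion compared in these columns (the array names and the use of a plain 2-D FFT are assumptions for this toy example; the actual method operates on diffusion latent patches as described in the Method section):

```python
import numpy as np

def fuse_latents(z_cur, z_guid):
    """Fuse two same-shaped latents in Fourier space: average the
    imaginary parts, keep the real part of the current latent."""
    F_cur = np.fft.fft2(z_cur)
    F_guid = np.fft.fft2(z_guid)
    im_avg = (F_cur.imag + F_guid.imag) / 2.0
    fused = F_cur.real + 1j * im_avg
    # Transform back to the spatial domain; for real-valued inputs the
    # residual imaginary component is numerical noise and is discarded.
    return np.fft.ifft2(fused).real
```

The "Re" ablation column would swap the roles (average the real parts, keep the current imaginary part), and "Re&Im" would average both.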
>L. The authors claim they discussed the limitation but I didn't find it in the text. Correct me if I am wrong.
Please refer to "6 Discussion and Considerations." One of the main limitations in generating higher-resolution images is the availability of effective metrics. More research is needed to develop methods that can accurately evaluate images without reducing their resolution to much smaller sizes, which results in the loss of detail. Additionally, there is no penalty for artifacts and duplications commonly found in many recent papers (see Figure 4 and Figure 6).
In the same section, we discuss Pixelsmith and the trade-off between preserving fine detail and suppressing artifacts. As noted, achieving true higher-resolution detail becomes increasingly challenging as the resolution increases, making it difficult to eliminate artifacts completely.
---
Rebuttal 2:
Title: Response to the rebuttal
Comment: I have read the author's response, as well as the comments and discussions with other reviewers. The author has partially addressed my concerns. However, I still think that the presentation of the paper is lacking at this stage. I will improve my score. However, since I cannot see the revised paper, I cannot judge whether the final presentation meets the requirements of NeurIPS.
---
Rebuttal Comment 2.1:
Comment: Thank you for the comment! We appreciate the opportunity to continue the discussion.
> The author has partially addressed my concerns.
Can you please tell us which concerns have not been addressed so we can attend to them as well?
> However, I still think that the presentation of the paper is lacking at this stage.
In our original reply, we kindly asked for specific examples where the language was not up to the NeurIPS level, as it is important to us to have this constructive feedback. Could we please extend the original request here and ask for specifics on where the paper is lacking as well? We have revised the entire paper and followed all suggestions from all reviewers. It would greatly benefit our work to know which parts are lacking, as we understand that this is the main (remaining) issue with the review. Examples of language before the revision (referring to W1) and the presentation after all the implemented changes would help our work, as we believe in the NeurIPS review process and the best practices to help promote research.
> However, since I cannot see the revised paper, I cannot judge whether the final presentation meets the requirements of NeurIPS.
As we cannot upload the full paper, we can kindly point you to the Method section, which is in the global response, and the Experiments section, which is in a reply to reviewer wkBK. The Related Work and Foundations sections that were changed can be found below. The rest of the paper remains unchanged, so the full paper is now shown in our replies.
>
> # Related Work
> Pre-trained DMs are following a ... each new version [30]. It is clear that there is demand for increasingly higher resolution generation.
> Currently, generating images ... application.
> ## Trained models
> (unchanged)
> ## Adapted models
> (unchanged)
> # Foundations
> ## Diffusion models
> DMs [15,42] are probabilistic generative models that first add noise to a distribution during diffusion and then learn to remove this noise during denoising. During training, a Gaussian probability distribution is learned, and during inference, sampling from the Gaussian leads to the data probability distribution. Executing this process in the latent space [37] is more resource-efficient, allowing for faster training and inference times.
> In formal terms, for a Latent Diffusion Model, if $z_0$ represents the data point in the latent space, given $z_0 \sim q(z_0)$ with $q$ being the diffusion process, then for timesteps $t \in \{1,\ldots,T\}$, noisy latents $z_1,\ldots,z_T$ with variance $\beta_t \in (0,1)$ are produced, defining a joint distribution conditioned on $z_0$:
>
>(unchanged equation 1)
>
>(unchanged equation 2)
>
> The training estimates an isotropic Gaussian distribution for $z_T$. During the denoising process we sample from $z_T \sim \mathcal{N}(0,{I})$ and remove the added noise using a neural network:
>
>(unchanged equation 3)
> ## Patch sampling
> The default denoising process of an LDM involves sampling the entire latent space at each timestep. While this approach works for lower resolutions, it becomes increasingly resource-intensive as the resolution increases.
> Instead, we modify the default process to denoise patches of a fixed dimension $128^{2}$ as introduced by DiffInfinite [1]. At each timestep, random patches are selected for denoising, and this process is repeated until the entire latent space is denoised (see Appendix Patch sampling). The DiffInfinite process relies on segmentation masks to condition each individual patch, providing rich spatial information. In a text-to-image DM, where the text-prompt is global for the entire latent space, using this method means that each patch is denoised with the same condition—the text-prompt. This leads to multiple repetitions of the condition and results in poor-quality generations. To address this, we implement a series of key components that enable scaling a pre-trained DM to resolutions never achieved before.
The above changes result in considerably smaller pre-Method sections, allowing the new, better-explained Method section to fit and enhance readability. The previous full text of this section will become a new appendix, titled "Patch sampling," with the addition of our explanation to question 2 of Rv T7y4. Nothing else changes or is added, so this appendix will not be new to you. | Rebuttal 1:
Rebuttal: We thank the reviewers (Rvs) for the valuable feedback and the opportunity to improve our work.
## Acknowledged Strengths
**Results:** All 4 Rvs agree that the paper achieves good results compared to current works. It's worth emphasizing that these works are published at the most prestigious conferences, underscoring the significant impact and competitiveness of our results
**Contribution:** Rv SvAB highlights the importance of our work, noting that it addresses an "important limitation of current models". This recognition emphasizes our paper's relevance in advancing the field and tackling key challenges. Further, Rvs SvAB, wkBK, and snt9 acknowledge the importance of our ability to use just 1 GPU regardless of the final resolution, identifying this as a significant contribution. We would like to add that our framework is not only memory-efficient but also the only one to scale a pre-trained diffusion model up to $32768^2$ ($\approx$1.1 gigapixel). Most current works show results up to $4096^2$ and very few of them scale up to $8192^2$, which means that our model is able to generate $\times16$ more pixels than any current work
**Originality:** Rv snt9 commends our method as both novel and interesting
**Ablations:** Rv t7y4 finds that no further ablations are needed and that the introduced parts in the method are effective and interesting. We note that the rest of the Rvs also do not request extra experiments showing the thoroughness of our evaluation
## Key Concerns
**Fourier transformations:** Some Rvs have noted issues with lines 187 and 248 concerning the justification of using the imaginary part of the guidance latents and why we did not use the real part instead. We observed that removing some of the data when fusing the two resolutions of the image helped to improve results and seemed to preserve low-frequency information well. The ablation studies compared Re+Im, Im only, and Re only; Im only performed best empirically. We thank the Rvs for highlighting this, as our observations were poorly formulated. We will change all references to reflect that the choice is based on empirical observations
**Presentation:** Some Rvs were concerned about the presentation. We have revised the organization to enhance clarity. The Related Work and Foundations have been reduced and the Method has been rewritten (see below a preview due to character limitations). Additionally, Fig. 3 has been revised to align more closely with the Method, ensuring a seamless understanding (f.i. in Fig. 3 the "3. Image guidance" aligns with "Image guidance" in Overview)
>## Framework
>In this section, we describe the workflow of Pixelsmith, detailing how the framework adapts pre-trained text-to-image LDMs to generate images with higher resolutions on a single GPU (see Fig. 3). In order to generate ultra high resolution images without artifacts, we introduce these key components: the Slider (see Slider), patch averaging (see Patch averaging) and masking (see Masking).
>
>### Overview
>
>**Text-to-image generation:** First, given a conditional prompt $c$, ... (line 174-176) SDXL.
>
>**Upsampling process:** After the image generation, we apply an upsampling algorithm ... (lines 176-179)
>
>**Image guidance:** Once the guidance image is encoded in the VAE's latent space $z_0^{guid}=\mathcal{E_\theta}(x^{guid})$, we can easily sample each latent variable of the diffusion process through the forward diffusion process $z^{guid}_t \sim q(z^{guid}_t|z^{guid}_0)$.
>
>**Image generation:** The generative process starts from $z_T \sim \mathcal{N}(0,I)$, which has the same dimensions as $z^{guid}_T$. At each step, a random patch is cropped as described in Section 3.2:
>
>(Eq. 4)
>
>where $\mathcal{C}^{i,j}$ crops the latent variables $z_t,z_t^{guid}$ at the coordinates $i,j$ for the patch sampling described in Section 3.2.
>
>The Slider’s position (see Slider), indicated by a blue line in Fig. 3, determines whether the guidance mechanism (see Guidance mechanism) or unguided patch denoising will be applied. In the unguided mode, each patch is based solely on the previous one, similar to a conventional patch denoising process. The Slider allows control over whether a generated image will be slightly or significantly altered compared to the previous resolution.
>After the denoising process has ended, the latents $z_0$ are decoded and the higher resolution image is generated. Using a cascade upsampling approach, this generated image can be upsampled again, repeating the process to achieve an even higher resolution image.
>
>### Cascade upsampling
>(unchanged)
>
>## Guidance mechanism
>The guidance mechanism fuses the $\hat{z}^{guid}_t$, $\hat{z}_t$ and $\hat{z}^{guid}\_{t-1}$ random patches to generate the $\hat{z}\_{t-1}$ patch. First, the $\hat{z}^{guid}_t$ and $\hat{z}_t$ patches are transformed to the Fourier space using a Fast Fourier Transformation ($\mathcal{FFT}$), where their imaginary parts $\mathcal{I}m$ are averaged:
>
>(Eq. 5)
>
>The imaginary part is then combined with the real part of the $\hat{z}_t$ patch to form $\hat{z}^{FFT}_t = \mathcal{R}e(\hat{z}_t) + i \hat{z}^{im}_t$
, which is then transformed back to the spatial domain:
>
>(Eq. 6)
>
>The output is then used as the condition for the reverse diffusion process
>
>(Eq. 7)
>
>To prevent further prompt duplications across the entire latent, we use masking.
>
>### Masking
>
>We combine the sampled $\hat{z}^{iFFT}\_{t-1}$ with the image guidance $\hat{z}^{guid}\_{t-1}$ using a chess-like mask $\Lambda$
>
>(Eq. 8)
>
>### Patch averaging
>
>Due to overlapping patches, visible distinctions can sometimes be noticed at the borders of the patches (Appendix C). To eliminate these, we create a zone where the patches meet and take the average of both patches to achieve a seamless denoised result.
>
>### Slider
>(unchanged)
Masking, Patch averaging and Slider will also include Appendices G, C and F
Pdf: /pdf/b25d858882c4ed709a79054b2209d8e7d381ba81.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search | Accept (poster) | Summary: This paper proposes a black-box jailbreaking framework, RLbreaker, which uses reinforcement learning to help optimize jailbreaking prompts.
At each training step, the designed RLbreaker agent selects a mutator from a small set, and the helper LLM then uses the selected mutator to enhance the prompt. The optimized prompt is fed to the victim LLM and an unaligned LLM, and the framework uses the difference between the two responses to calculate the reward for the next training step.
The experiment section includes general attacks and transfer attacks, as well as attacks against jailbreaking defenses.
Strengths: Compared with the most popular baselines (i.e., GCG and AutoDAN), the framework proposed by this paper decreases the search space, so the training speed is fast.
The structure of the paper is very clear and it is easy to understand the framework.
Section 4, consisting of three kinds of evaluation (general/transfer/ablation), is reasonable.
Although the action space is small, the results in the ablation section show that randomly selecting actions leads to a significant performance drop. This proves that the agent is definitely effective.
Weaknesses: All experiments are done on a 300-sample test set that follows the same distribution as the training set, which makes it hard to prove the method's robustness and effectiveness.
Technical Quality: 2
Clarity: 3
Questions for Authors: How did you determine the action space you ultimately chose?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The reviewer points out that all experiments are conducted on a 300-sample test set, which has the same distribution as the training set, making it difficult to prove the method's robustness and effectiveness.**
We thank the reviewer for the question. When splitting the training and testing set, we avoided putting similar templates in both training and testing to encourage enough differences and avoid data leakage. To further demonstrate the generalizability of our method, we conducted an additional experiment. We first trained our agents on the training set, following the same setup in Section 4. Then we tested our agents on the other two harmful question sets. One is from our baseline: GPTFUZZER, where they construct 100 questions from scratch. The other is called MaliciousInstruct from [2], where they claim it is a different dataset from AdvBench that also contains 100 harmful questions. We select Llama2-7b-chat as the target model and GPT-3.5-Turbo as the helper model, and the results are in Table 9 in the submitted pdf file.
We can see that after applying our RL agent to the other two testing sets, the performance even improves compared to the original AdvBench testing set. Furthermore, we showed the transferability of our agents across different models, which also demonstrates the generalizability and robustness of our approach.
**The reviewer asks how the authors determined the chosen action space.**
We thank the reviewer for the question. When designing the action space $\mathcal{A}$ of an RL agent for our jailbreaking attack, there are two requirements on $\mathcal{A}$. First, it should enable diverse and substantial changes to the prompts. Second, $\mathcal{A}$ should not be ultra-large, which would make it difficult for the agent to learn an effective policy.
Before designing our action space by selecting different mutators, we made an initial trial of designing the action as selecting different tokens from the whole vocabulary, with the state being the current prompt. We refer to this design as “token-level RL”, and the details are in Section D.4 in the Appendix. Our experiment results demonstrated that this token-level RL design cannot generate any effective jailbreaking prompts. The reason is that the action space is too huge for the agent, being the same size as the vocabulary (about 30,000). Thus, considering the maximum length of the generated jailbreaking suffix as $N$, the number of possible combinations is $30{,}000^N$, which makes it unrealistic for an agent to learn an effective policy.
Then, we took a different avenue and considered mutating the input prompts at the sentence level. We selected some common ways of mutating a sentence as our actions that enable enough changes to the input while constraining the total action space. Note that our framework is flexible and can incorporate more actions in the future.
[1] Gptfuzzer: Red teaming large language models with auto-generated jailbreak prompts. Yu et al., arXiv 2024.
[2] Catastrophic Jailbreak of Open-Source LLMs via Exploiting Generation. Huang et al., ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I will keep the Rating since it is already the highest.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for supporting the paper and maintaining the highest rating. We will update our paper based on the suggestions. | Summary: This work proposes a new method to jailbreak LLMs to elicit harmful responses. It adapts deep reinforcement learning to learn a policy for sampling jailbreaking operations (prompt modifications) from a predefined pool. An LLM is then used to rewrite the query prompt in compliance with the sampled operation. To guide the policy learning, it proposes a new reward function that compares the target model's response against a reference response. The empirical results confirm the efficacy and efficiency of the proposed method.
Strengths: 1. the studied problem, jailbreaking LLMs, is an important topic in AI safety.
2. although applying DRL to text optimization is not novel, the way different components are designed and put together in this work is great and interesting.
3. the efficacy improvement of the proposed method is large in some cases, but not all.
Weaknesses: 1. For black-box evaluation, GPT-3.5-turbo may be dated. GPT-4 is recommended.
2. I am a little concerned about the results of GPT-Judge since it shows an inconsistent scale compared to Sim. For example, for Llama2-70b-chat in Tab. 1, the performance improvement of the proposed method indicated by GPT-Judge over the previous methods is dramatically larger than that indicated by Sim. Similar cases also occur in Tab. 2 and Tab. 3.
3. The training of the RL agent may be unstable. How many training runs did the authors perform to report the results, e.g., in Tab. 1? How much is the variance?
4. For the results under Sim., the proposed method sometimes underperforms previous works, e.g., on Mixtral in Tab. 1. Similar situations also occur in Tab. 2 and Tab. 3. Do the authors have some insights about why?
5. The efficiency of the proposed method is a concern. As shown in Tab. 8, the proposed method, even though more efficient than AutoDAN and GCG, costs much more than other black-box methods on many models.
Technical Quality: 3
Clarity: 3
Questions for Authors: see the Weakness above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The reviewer suggests that GPT-3.5 Turbo may be outdated for black-box evaluation and recommends using GPT-4 instead.**
We thank the reviewer for this suggestion. We evaluate our attack on GPT-4, following the same experimental setup as in Section 4. We select the latest GPT-4o-mini (07/18/2024) as the target model and limit the total queries to 10,000. For baselines, we select GPTFUZZER, as its performance is better than the other black-box attacks. The results in Table 6 in the submitted pdf clearly demonstrate RLbreaker’s superior ability to bypass strong alignment compared with the baseline. We did not run a large-scale experiment on GPT-4 due to cost considerations.
**The reviewer also raises concern about the results of GPT-Judge, noting an inconsistent scale compared to Sim.**
We thank the reviewer for pointing this out. We believe this is because the effective value range for Sim. is smaller than that of GPT-Judge. As we observed during the experiments, Sim. gives at least a 0.6 score even for a rejected answer, whereas the GPT-Judge metric spans the full [0, 1] range.
We also tried different text encoders and obtained consistent observations, as demonstrated in our ablation study in Section 4.
Furthermore, we use bge-large-en-v1.5 from Hugging Face to convert text into embeddings for cosine-similarity computation. As noted by the model developers, the typical similarity distribution for this model falls within [0.6, 1], which also verifies our previous observation. Across Tables 1, 2, and 3 in Section 4, we observe that the Sim. scores for RLbreaker and the other baselines exceed 0.6.
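As a minimal illustration of the similarity computation described here, the sketch below computes cosine similarity between two embedding vectors. The toy four-dimensional vectors are stand-ins invented for this example; in the rebuttal's setup the embeddings would come from the bge-large-en-v1.5 encoder.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for sentence embeddings of a target-model
# response and a reference answer (illustrative values only).
response_emb = [0.2, 0.7, 0.1, 0.4]
reference_emb = [0.25, 0.65, 0.05, 0.5]

sim = cosine_similarity(response_emb, reference_emb)
print(f"Sim. = {sim:.3f}")
```

Note that a similarity floor such as the [0.6, 1] range mentioned above is a property of the particular encoder's embedding geometry, not of the cosine formula itself, which ranges over [-1, 1].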
We also want to note that as discussed in [1], GPT-Judge evaluates responses in a more comprehensive manner, considering contextual relevance, coherence, and the capacity to meaningfully answer the posed questions. The significant improvements highlighted by GPT-Judge demonstrate that our RL-guided approach produces responses that are more contextually relevant and accurate. It is also important to note that we have not tailored the GPT-Judge prompt specifically for our attack; instead, we employ a prompt directly sourced from existing published work. This approach ensures the generalizability of our method.
[1] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability. Guo et al., ICML 2024.
**The reviewer raises concerns about the instability of training the RL agent, asking how many training runs were conducted to report the results in Tab. 1 and what the variance is.**
We thank the reviewer for this question. Our training runs for the RL agent are constrained by a limit of 10,000 queries to the target model. Specifically, we set the number of forward steps in the environment to 16, i.e., we update our agent after every 16 environment steps. We use the collected trajectories to update the agent with our customized PPO algorithm for 4 training epochs; thus, the agent is updated 2,500 times in total. We will include all training details in our appendix in the next version.
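The update count stated above can be reproduced with simple arithmetic; all numbers are taken from the rebuttal (a 10,000-query budget, 16 environment steps per rollout, and 4 PPO epochs per rollout):

```python
QUERY_BUDGET = 10_000   # cap on queries to the target model
STEPS_PER_ROLLOUT = 16  # environment steps collected before each update
PPO_EPOCHS = 4          # optimization epochs over each collected rollout

agent_updates = QUERY_BUDGET // STEPS_PER_ROLLOUT   # 625 rollouts/updates
total_training_passes = agent_updates * PPO_EPOCHS  # 2,500 in total
print(agent_updates, total_training_passes)
```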
To understand the variance of our training process, we trained the agent five times using different random seeds on the target model llama2-7b-chat with GPT-3.5-turbo as the helper model. The results in Table 7 and Figure 1 in the submitted pdf show low variance during training and testing across these seeds. We believe the reason is that we constrain the action space. Instead of letting the model choose a token within the vast token space, we let the model mutate the input templates, which gives a much smaller search space and thus more stable agents.
**The reviewer points out that under Sim., our proposed method sometimes underperforms compared to previous works, such as on Mixtral in Table 1, with similar observations made in Tables 2 and 3.**
We thank the reviewer for pointing this out. We suspect this is because the Mixtral model answers questions in a style different from the reference answers (given by the Vicuna model). When comparing Mixtral's answers with the reference answers, some answers may receive a low Sim. score simply because Mixtral phrases its answer very differently from the reference, even though, under GPT-Judge, the attack success rate is higher.
As discussed in Section 4, Sim. can indeed introduce false negatives when the target model answers a question very differently from the reference answer. This is why we use multiple metrics in the evaluation. However, we still use Sim. during training, mainly because it is more efficient than GPT-Judge and introduces fewer false negatives than keyword matching.
**The reviewer raises concerns about the efficiency of our proposed method.**
We thank the reviewer for the question. Tab. 8 in our paper shows that RLbreaker is more efficient than AutoDAN and GCG and comparable to GPTFUZZER. Although our method is slower than PAIR and Cipher, these two methods are far less effective than ours. We designed an additional experiment to show the efficiency of our method relative to those baselines. To make these two methods achieve performance comparable to ours, we need to significantly increase their total number of queries. Specifically, for Cipher, we executed their jailbreaking prompts up to 50 times per question, considering it a success if any trial resulted in a successful jailbreak; this led to at most 16,000 queries. For PAIR, we set their two key hyper-parameters, the number of iterations and the number of streams, to 3 and 20 respectively, resulting in a maximum of 19,200 queries. Results in Table 8 in the submitted pdf file show that these two methods are actually less efficient than ours when aiming for similar attack effectiveness.
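The query totals quoted above follow directly from the per-question settings in this paragraph; the 320-question test set size is taken from the rebuttal's other responses:

```python
NUM_TEST_QUESTIONS = 320  # size of the test question set (from the rebuttal)

# Cipher: up to 50 jailbreak trials per question.
cipher_max_queries = 50 * NUM_TEST_QUESTIONS    # 16,000
# PAIR: 3 iterations x 20 parallel streams per question.
pair_max_queries = 3 * 20 * NUM_TEST_QUESTIONS  # 19,200
print(cipher_max_queries, pair_max_queries)
```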
---
Rebuttal Comment 1.1:
Comment: Thanks much for your thorough responses. Most of my concerns have been addressed, so I decided to raise my score to 6. However, I still have a concern about the faithfulness of the evaluation metric, GPT-Judge. I understand that this is a previously accepted practice. I personally question it because of existing observations on the instability of LLMs when answering the same question written as different prompts of the same meaning, as in [1].
[1] Zong et al., Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations, ICML 2024.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer for updating the score! We are happy that our response could help address the reviewer's concern.
Regarding the reliability of the GPT-Judge metric, we agree with the reviewer that it is not an entirely stable metric for assessing the success of a jailbreaking attack. However, as shown in both [1] and [2], GPT-4 demonstrates a high correlation with human judgment (0.9), suggesting its potential utility as a reliable verifier for responses provided by victim LLMs. While human annotation remains a more accurate method of judgment, it is costly given our experimental setup, which requires evaluating 320 questions in the test set for 5 baselines and our method. Therefore, we believe GPT-Judge serves as a cost-effective and efficient alternative for evaluating attack effectiveness. Furthermore, we ensure consistency in our evaluations by applying the same metrics across all baselines and our method.
In future work, we will carefully consider the reviewer's suggestions and explore more strategies to enhance the reliability and stability of our judgment metrics, for example, integrating results from a harmful content classifier [3] alongside GPT-Judge, or aggregating evaluations from multiple LLMs and performing a majority vote.
[1] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability. Guo et al., ICML 2024.
[2] AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models. Liu et al., ICLR 2024.
[3] HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal. Mazeika et al., arXiv 2024. | Summary: This paper proposes a new jailbreaking attack on LLMs with deep-reinforcement learning (DRL) techniques, which takes jailbreak prompts as states and mutations as actions.
Strengths: 1. This paper is the first to leverage DRL techniques for jailbreaking LLMs, bringing new insights to this community.
2. The experiments include multiple LLMs and attack baselines.
3. The evaluation considers ablation studies on each part of the proposed method, showing the robustness against hyper-parameters of the DRL training.
Weaknesses: 1. The organization of this paper can be substantially improved to polish readability. For example, Section 2 fails to discuss the background of (deep) reinforcement learning and related work on jailbreaking.
2. Some technical details of the method are not specified. For example, in line 164, which mutators are used is not clearly introduced. Instead, the authors simply use a reference here.
3. The plausibleness of the reference answer $\hat u_i$ derived from an unaligned LLM should be further justified. Specifically, the weakly aligned vicuna-13b does not always respond to harmful prompts. What happens if both the target LLM and this model refuse to answer a harmful query?
4. The experiment does not indicate the training/inference time required for the method.
5. The experiment results show that the improvement of the proposed method over existing methods is somewhat limited.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The reviewer suggests improving the paper's organization to enhance readability. For example, Section 2 lacks discussion on the background of DRL and related work on jailbreaking.**
We thank the reviewer for the suggestions. Due to the space limit, we did not add a background section for DRL. Instead, we introduced the key formulations of our DRL system in Section 3. We will add a background of DRL (in the appendix) in the next version. Regarding related works on jailbreaking, we would like to kindly point out that we have included a comprehensive literature review in Section 2, with a special focus on black-box attacks.
To the best of our knowledge, there are no existing works that specifically apply DRL to jailbreaking attacks, although other works apply RL to attack LLMs with different attack goals and threat models, as discussed in Section 2. All these works used RL to directly generate tokens, which is similar to the token-level action baseline in our ablation study (denoted as Token in Figure 2 in our paper). As we demonstrated in our experiments, the action space is too large for this type of method to work on jailbreaking problems.
**The reviewer also suggests that the plausibility of the reference answer from an unaligned LLM needs further justification. The reviewer questions what would occur if both the target LLM and this model refuse to answer a harmful query.**
We thank the reviewer for their questions. First, we would like to clarify that we utilize an uncensored version of the Vicuna-7b model, rather than a weakly aligned Vicuna-13b, to obtain reference answers. This uncensored Vicuna-7b model has not been fine-tuned with human preferences and lacks guardrails, ensuring it responds to user prompts irrespective of potential harmfulness. We also manually double-check all the reference answers and ensure they indeed answer the questions.
Additionally, we expect that even if some questions are rejected or deemed irrelevant, the RL agent can still learn an effective strategy for selecting mutators through its interactions with the other questions. To test this, we randomly marked 10% and 20% of the reference answers as unavailable by setting them to ''I’m sorry I cannot assist with this request'', which also mimics the case where the unaligned model refuses to answer. Then we trained our RL agent on Llama2-7b-chat. The results in Table 4 in the submitted pdf demonstrate that even when some reference answers are unavailable, our method still achieves good jailbreaking performance.
We will address these points in our next version and we will publish all reference answers used in our experiments upon acceptance.
**The reviewer raises concern that some technical details of the method are missing, e.g., details of mutators, and the training/inference time required for our method.**
We thank the reviewer for the question. We would like to kindly point out that in Section 3, we detail the five mutators we use, including their names and how we conduct the mutation. We also specify the LLM that we use to perform the mutation. The prompts for each mutator are shown in Tab. 4 in our paper. Regarding the training and inference time, we consider the total time (training + inference) as one of our efficiency metrics, as discussed in Section 4, and the results are in Tab. 8 in our paper. We also separately report the training and inference time of our method in Table 5 in the submitted pdf. We will include these results in our next version.
**The reviewer comments that the experimental results show our method has limited improvement over existing methods.**
We thank the reviewer for the question. In our experiments, we conducted a comprehensive comparison of our method against five SOTA baselines. These comparisons and metrics were carefully designed to fairly evaluate the effectiveness of our methods.
To address the reviewer’s concern specifically:
Regarding the baselines, we carefully selected four SOTA black-box jailbreaking attacks covering genetic-method-based attacks and in-context-learning-based attacks, and we also include the representative white-box attack GCG. For the evaluation metrics, we leverage four metrics for a comprehensive assessment of attack effectiveness, while existing works usually leverage only one or two. Among those metrics, we consider a higher GPT-Judge score sufficient to show the advantages of our method over other attacks, as GPT-Judge has a higher correlation with human annotations, as demonstrated by [1]. We also directly adopt their judging prompt to ensure a fair comparison; this also guarantees that we are not tailoring our attack to a prompt of our own design.
Our method achieved a significant improvement in the GPT-Judge score, which is a critical metric for assessing the effectiveness of the attacks. Specifically, our approach showed about a 40% improvement on Llama2-70b-chat over the best-performing baseline. This is a substantial enhancement, demonstrating the efficacy of our proposed attack. Our attack also achieves the highest GPT-Judge score across all six target LLMs, compared to baseline methods. We have also provided detailed tables (Tab. 1 and Tab. 6 in our paper) that explicitly illustrate these advancements, and we performed a thorough ablation study to validate the core designs of RLbreaker.
[1] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability. Guo et al., ICML 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks to the Reviewer ipyX again for the insightful comments and questions. Since the discussion phase is about to end, we are writing to kindly ask if the reviewer has any additional comments regarding our response. We are at their disposal for any further questions. In addition, if our new experiments address the reviewer's concern, we would like to kindly ask if the reviewer could reconsider their score.
---
Rebuttal 2:
Comment: Thanks for the rebuttal, your efforts are truly appreciated. I've raised my rating to 5. Some comments:
- I know that there are no existing works that specifically apply DRL to jailbreaking attacks (see strengths); what I requested here is to discuss how (deep) reinforcement learning has been used to attack ML models / LLMs. Such connections can help the readers better understand to what extent these techniques have been leveraged in the adversarial ML area.
- The requirement of manually double-checking all the reference answers is a limitation that needs to be explicitly acknowledged.
- It would be great to evaluate stronger attack/defense baselines, like the demonstration-based in-context attack/defense (https://arxiv.org/abs/2310.06387).
---
Rebuttal Comment 2.1:
Comment: Thanks to the Reviewer ipyX for their insightful comments. We really appreciate the reviewer for raising their score to acknowledge our effort. Below, we would like to provide some clarifications for the reviewer's additional comments.
1. We totally agree with the reviewer that discussing how DRL has been used to attack LLMs, even if not for jailbreaking purposes, is very helpful. We have a short summary of existing works in Section 2. We also experiment with a baseline (token-level RL) generalized from existing works in our evaluation. We will further emphasize these in our paper.
2. We would like to clarify that we did the manual check only to assess the quality of the unaligned model in generating reference answers. This is a one-time effort, akin to an ablation study or sanity check, and is not required to run our method.
3. Moving forward, we will follow the reviewer's suggestion to add new baselines, including the one pointed out by the reviewer. | Summary: This paper introduces RLbreaker, a novel deep reinforcement learning (DRL) approach for generating jailbreaking prompts to attack large language models (LLMs). The authors frame jailbreaking as a search problem and design a DRL agent to guide the search process more efficiently than existing stochastic methods. Key technical contributions include a customized reward function, action space based on prompt mutators, and a modified proximal policy optimization (PPO) algorithm. Through extensive experiments, the authors demonstrate that RLbreaker outperforms state-of-the-art jailbreaking attacks across multiple LLMs, shows resilience against existing defenses, and exhibits strong transferability across models. The paper also includes ablation studies validating the key design choices.
Strengths: - Good results on Llama-2-70B (52.5% ASR).
- The proposed method makes sense, and, to the best of my knowledge, no one has proposed using an RL algorithm for guiding jailbreak search.
- The transferability of the attack is non-trivial.
Weaknesses: I have concerns about the reported numbers:
- Why is GCG shown as N/A for GPT-3.5 Turbo? Evaluating the result of a transfer attack would still make sense (and I would expect that the ASR according to GPT-4 as a judge should be above 50%).
- Moreover, why does PAIR perform so badly on GPT-3.5 Turbo? You report 9% ASR while the original paper reports 60% ASR (but on a different set of AdvBench requests). Similarly, Mixtral is known to be a non-robust model, so it’s surprising to see that PAIR achieves only 15% ASR on it.
- Why is the ASR according to the GPT judge significantly higher when the perplexity filter is enabled (Table 2: 69.1%, Table 1: 52.5%)?
Other concerns:
- Minor: “However, the random nature of genetic algorithms significantly limits the effectiveness of these attacks.” - why?
- Minor: Tables 2 and 3 have a too small font.
I'd be willing to increase the score if the concerns about the reported numbers are resolved.
Technical Quality: 2
Clarity: 2
Questions for Authors: See the questions about the reported numbers.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The reviewer raised concerns regarding the N/A values for GCG on GPT-3.5 Turbo in Tab. 1, noting that GCG could be run on another LLM and the obtained adversarial prompts used for a transfer attack.**
We thank the reviewer for the suggestion. Following the suggestions, we added an experiment to test GCG's performance on GPT-3.5 Turbo. We followed the original GCG setup and used two different models (Vicuna-7b, Vicuna-13b) as the source models to generate the jailbreaking suffixes. Then, we used these suffixes to attack GPT-3.5 Turbo. We followed the same setup in our Section 4 and set the upper bound for the total query times as 10,000. The results in Table 1 in the submitted pdf indicate that when making a transfer attack on GPT-3.5 Turbo, GCG cannot outperform RLbreaker.
**The reviewer then raised concerns regarding why PAIR’s ASR on GPT-3.5 Turbo is lower than what is reported in the original paper and why its ASR on Mixtral is only 15%.**
Thanks for the question. The main reason for the low PAIR performance is that we set an upper bound of 10,000 on the total number of queries for the entire attack process, in order to enable an apples-to-apples comparison across all methods, given that some methods have a training process while others do not. For PAIR, which has no training process, this limit means all 10,000 queries are allocated directly to the testing phase across all 320 questions. Under this constraint, we set their two key hyper-parameters, the number of iterations and the number of streams (parallel conversations), to 5 and 6 respectively, whereas the original paper uses 5 iterations and 20 streams. PAIR also noted that the total number of queries has a significant influence on jailbreaking performance; this reduction in parallel streams is thus a primary factor behind the lower ASR observed in our results compared to the original paper. Second, following [1], we wrote the GPT-Judge prompt differently from PAIR.
We added a new experiment following the same setup as PAIR, setting the number of iterations to 5 and the number of streams to 20, which leads to an upper bound of 32,000 total queries. We then ran PAIR and RLbreaker with this new upper bound. We selected Mixtral-8*7B and GPT-3.5-Turbo as the target models and reported the ASR measured by both our GPT-Judge metric and the JUDGE function proposed by PAIR. Results in Table 2 in the submitted pdf file demonstrate that RLbreaker still outperforms PAIR across the two evaluation metrics.
[1] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability. Guo et al., ICML 2024.
**The reviewer asked why the ASR using GPT judge is higher when the perplexity filter is enabled.**
We thank the reviewer for the question. We would like to kindly point out that Table 1 in our paper evaluates Llama2-70b-chat, while Tab. 2 focuses on Llama2-7b-chat. To address the reviewer's question further, Table 3 in the submitted pdf shows the ASR for the Llama2-70b-chat model specifically when a perplexity filter is applied. The result aligns with Tab. 2 in Section 4, demonstrating RLbreaker’s resilience against the perplexity defense.
**The reviewer also asked why the random nature of genetic algorithms significantly limits the effectiveness of these attacks.**
We thank the reviewer for raising this insightful question. As we briefly discussed in Section 3, the limitations of genetic algorithms in developing jailbreaking attacks are two-fold.
Inefficiency in Search Process: Stochastic search methods, including genetic algorithms, initiate with a randomly chosen initial region and explore this region randomly before moving to other areas. This process involves random mutation and selection, which leads to a highly inefficient search process. As demonstrated in the grid search example in Appendix B.2, stochastic search requires at least three times more grid visits compared to guided search, highlighting its inefficiency.
Constraints of Random Mutation: In the context of jailbreaking attacks, existing methods that employ genetic algorithms iteratively generate new prompts by randomly selecting mutators to modify the current prompts. This randomness in mutator selection significantly constrains the search efficacy, as it often directs computational resources toward less promising areas of the search space. This approach is particularly ineffective in the expansive search spaces common in jailbreaking scenarios. Furthermore, after each selection of the mutators, the absence of informative feedback means that those genetic algorithm-based attacks cannot effectively utilize prior knowledge or feedback. In contrast, DRL-guided searches benefit from RL agents that prioritize actions leading to successful outcomes, driven by the accumulation of rewards.
As a result, the random nature of genetic algorithms limits their effectiveness in jailbreaking attacks primarily due to their inefficient exploration of the search space and the significant computational overhead involved in randomly selecting mutators. This inefficiency is especially problematic in large search spaces, leading to constrained search efficacy and reduced overall effectiveness.
We will add more discussion in the next version.
**Other minor issues.** We thank the reviewer for the suggestion, and we will increase the font size of Tables 2 and 3 in our next version.
---
Rebuttal Comment 1.1:
Title: Follow-up comment
Comment: Thanks for the detailed response, it addresses my concerns. I increase my score to 6.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for increasing the score and we are happy that our response could help address the reviewer's concern. As we proceed with the revision, we will be mindful of the suggestions and present a stronger version of our paper based on the reviewer's feedback. | Rebuttal 1:
Rebuttal: We thank the reviewers for the constructive feedback. Below, we summarize our responses:
**New experiments:**
We added all experiments suggested by reviewers (All the results are in the submitted PDF). Below, we give a brief summary.
1. **GCG transfer attack on black-box model.** We demonstrated the jailbreaking effectiveness of RLbreaker compared to GCG on GPT-3.5-Turbo. We followed GCG and used Llama2-7b-chat to generate jailbreaking suffixes for GCG (Reviewer pYsF).
2. **Compare our method with baselines under a higher query limit (in response to why existing methods have a low performance).** We demonstrated the jailbreaking effectiveness of RLbreaker vs. PAIR on Mixtral-8*7B-Instruct, with a query limit of 32,000. The effectiveness is measured using GPT-Judge and JUDGE metrics from PAIR (Reviewer pYsF).
3. **Our method on Llama2-70b-chat when perplexity defense is added.** We added the results of Llama2-70b-chat when perplexity defense is added (Reviewer pYsF).
4. **Robustness against reference answers.** We demonstrated the robustness of our methods when some of the reference answers are not available (Reviewer ipyX).
5. **Training and inference time of our method.** We reported the training and inference time (in minutes) for RLbreaker across six target LLMs (Reviewer ipyX).
6. **Our method’s performance on jailbreaking the latest GPT-4o-mini.** We demonstrated RLbreaker’s effectiveness in jailbreaking the latest GPT-4o-mini model compared to the best-performing baseline (Reviewer 5ZZd).
7. **Training and testing stability.** We demonstrated RLbreaker’s stability across different random seeds, including training curves of mean rewards and the statistics of GPT-Judge score (Reviewer 5ZZd).
8. **Our method’s performance when the reference answers are obtained using different models (in response to why our method has lower Sim. on Mixtral).** We demonstrated RLbreaker’s jailbreaking effectiveness when we vary the model that is used to obtain the reference answers (Reviewer 5ZZd).
9. **Performance on some out-of-distribution question sets.** We demonstrated RLbreaker’s robustness on two out-of-distribution testing question sets (Reviewer 5U1Z).
Below, we also summarize the key points in our responses:
**Reviewer pYsF**
1. We clarified and demonstrated that with a larger upper bound on the total number of queries, the baseline method can match the ASR reported in its original paper, and our method also achieves better results.
2. We clarified why the random nature of genetic method-based attacks significantly limits the effectiveness of jailbreaking attacks.
**Reviewer ipyX**
1. We clarified and pointed out our discussion on the related works of jailbreaking attacks and we will follow the reviewer’s suggestion of adding another section about the background of deep reinforcement learning (DRL).
2. We clarified that we have included all required details of our designed mutators and we also reported the training and inference time separately for our method.
3. We clarified why our method introduces significant improvements over existing baselines.
**Reviewer 5ZZd**
1. We clarified why there is an inconsistency between GPT-Judge and Sim.’s scale, highlighting the reliability of GPT-Judge over Sim. and the significant improvements that RLbreaker introduces.
2. We clarified that the increased expense of RLbreaker is worthwhile, as our method generates more powerful jailbreaking prompts, thereby enhancing LLM safety and alignment through effective adversarial training.
**Reviewer 5U1Z**
1. We clarified how we determine our action space by detailing the selection criteria and the underlying methodology.
Pdf: /pdf/82fd7fa4c4fd79e70b6613d28c26d565c9d76408.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Hierarchical Programmatic Option Framework | Accept (poster) | Summary: This work builds on deep reinforcement learning methods that generate programmatic policies and adapts them to solve long-horizon, repetitive tasks. Concretely, it proposes HIPO, short for Hierarchical Programmatic Option framework, which retrieves previously discovered programs via their neural embeddings and uses these programs as reusable options to solve recurring tasks.
Strengths: 1) Effective approach: This work tackles an intuitively challenging problem and proposes a practically effective solution. The results on long-horizon tasks in particular, shown for Karel-Long in Table 2, demonstrate the effectiveness of the proposed method.
2) Thorough ablation studies: the paper conducts various ablation studies, showing that the proposed approach performs best among its variants.
Weaknesses: 1) Lack of retrieval evaluation: The paper proposes special techniques to improve the effectiveness, diversity, and compatibility of retrieved programs; however, is there any empirical evidence showing that the designed modules do improve the effectiveness, diversity, and compatibility of the programs?
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) While generating reusable subroutines is intuitive, there are no examples of what these low-level policies look like anywhere in the paper, making it harder to concretely visualize them. Adding a few examples, even in the appendix, would make the process more interpretable.
2) This work chooses a particular DSL (as introduced in Section 3), does the choice of DSL affect the performance of the proposed process? Would a less well-designed DSL cause the method to be ineffective?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
>Is there any empirical evidence to show that the designed modules do improve the effectiveness, diversity, and compatibility of programs?
We thank the reviewer for this insight. We discuss the evaluation of effectiveness, diversity, and compatibility below.
- Effectiveness: The ablation study in Section 5.3 and the experimental results presented in Table 1 serve as empirical evidence demonstrating the efficacy of the proposed CEM+diversity searching paradigm.
- Diversity: Section 5.7 (Figures 21, 22, 23, and 24) provides multiple examples of the retrieved option program sets that cover diverse skills and subroutines for each given task.
- Compatibility: As suggested by the reviewer, to further investigate and quantify the compatibility among the retrieved programs, we conduct additional experiments of CEM+compatibility ×|M| (i.e., CEM with the evaluation function $G(z) = \frac{1}{D} \sum_{i=1}^{D} R_{\Psi_i}$, where the number of program sequences $D$, the specified program sequence $\Psi_i$, and the normalized reward $R_{\Psi_i}$ are defined as in Section 4.2.3 and Equation 1) for N = 1 run, selecting the result as the i-th option. We repeat this process |M| times and take all |M| results as the set of programmatic options.
| Method | Seesaw | UP-N-Down | Farmer | Inf-DoorKey | Inf-Harvester |
|-|-|-|-|-|-|
| CEM × \|M\| | 0.06 $\pm$ 0.10 | 0.39 $\pm$ 0.36 | 0.03 $\pm$ 0.00 | 0.11 $\pm$ 0.14 | 0.41$\pm$ 0.17 |
| CEM+compatibility × \|M\| | **0.23** $\pm$ 0.32 | **0.43** $\pm$ 0.34 | **0.14** $\pm$ 0.22 | **0.57** $\pm$ 0.3 | **0.66** $\pm$ 0.08 |
The results show that, across every task listed above, CEM+compatibility × |M| outperforms CEM × |M|, demonstrating the effectiveness of employing the proposed compatibility measure.
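The sequential selection procedure behind CEM+compatibility × |M| can be sketched as follows. This is only an illustration, not the authors' implementation: `cem_search` and `eval_compatibility` are hypothetical stubs standing in for the actual CEM optimizer and the averaged evaluation function $G(z)$.

```python
import random

def eval_compatibility(candidate, chosen, num_seqs=4):
    """Stub for the averaged evaluation function
    G(z) = (1/D) * sum_i R_{Psi_i}: average a (faked) normalized
    return over D option sequences built from `chosen` + `candidate`."""
    rng = random.Random(hash((tuple(chosen), candidate)))
    return sum(rng.random() for _ in range(num_seqs)) / num_seqs

def cem_search(chosen, pool):
    """Stub for one CEM run (N = 1): return the candidate in `pool`
    that maximizes the compatibility score given `chosen`."""
    return max(pool, key=lambda c: eval_compatibility(c, chosen))

def select_options(pool, num_options):
    """CEM+compatibility x |M|: repeat the search |M| times, each
    round conditioning the evaluation on already-selected options."""
    chosen = []
    for _ in range(num_options):
        remaining = [p for p in pool if p not in chosen]
        chosen.append(cem_search(chosen, remaining))
    return chosen

options = select_options(pool=list("abcdefgh"), num_options=3)
print(options)  # an |M| = 3 set of mutually compatible options
```

The key design point is that each round's evaluation depends on the options already retrieved, so the final set is scored jointly rather than program by program.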
We thank the reviewer for inspiring us to conduct this experiment, and we will include the results in the revised paper.
>While generating reusable subroutines is intuitive, there are no examples on how these low-level policies look like throughout the paper, making it a bit harder to concretely visualize how the policies look like. Adding a few examples, even in appendix, would make the process more interpretable.
We provide multiple examples in Figures 21, 22, 23, and 24 in the appendix. These concretely illustrate what the agent will do when following these programmatic options. Taking Inf-DoorKey as an example, this task requires iterating between three different skills.
- The first skill is picking up the key in a chamber to open the door. Option 1 of this task, shown in Figure 23, is suitable for acquiring this skill since this program will check whether a key exists in the agent position and pick it up to open the door.
- The second skill is navigating inside and between chambers. Option 2 of this task, shown in Figure 23, is suitable for acquiring this skill since this program contains numerous tokens about moving and turning directions.
- The third skill is locating and placing the key in a specific position to open the door. Option 3 of this task, shown in Figure 23, is suitable for acquiring this skill since this program will first place the key and then pick it up.
By combining these skills with the help of the high-level policy controller, the agent can pick up or place keys in the correct places and navigate inside and between chambers.
>Does the choice of DSL affect the performance of the proposed process? Would a less well-designed DSL cause the method to be ineffective?
As pointed out by the reviewer, the choice of DSL plays a vital role in the proposed framework. In this work, we choose Karel DSL because its semantics are designed to describe an agent’s behavior and how agents can observe and interact with the environment. Suboptimal DSLs (e.g., inaccurate tokens to indicate state or behavior) could lead to suboptimal programmatic policies. We will incorporate this discussion into the limitation section (Section J) in the paper revision.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response and additional experiments. The discussions and potential adjustments sound reasonable to me. I will keep my score. | Summary: Utilizing human-readable programs as policies has been recently proposed to enhance interpretability in reinforcement learning. This work introduces the Hierarchical Programmatic Option Framework (HiPO) that first embeds the programs into a smooth and continuously parametrized space, obtains a diverse and compatible skill set by applying the cross entropy method on the embedding space with respect to novel diversity and compatibility metrics, and finally learning a high-level policy upon the skill set. Experimental analysis of HiPO on the Karel problem sets manifests its effectiveness and zero-shot generalizability.
Strengths: The authors proposed novel reward functions that can enhance the diversity and compatibility of the program skill set obtained by CEM. The experimental results show that HiPO performs better on average compared to existing baselines, and every part of HiPO plays a significant role.
Weaknesses: 1. The execution time will differ from program to program, but HiPO does not consider this factor while incorporating a discount factor $\gamma$ of 0.99. The adoption of SMDP should be considered to deal with the variance in execution time appropriately. Also, HiPO does not discount the cumulative reward of low-level actions, further deepening the inconsistency between the theoretical objective and the actual loss function. The authors should justify such design choices.
2. In Section 5.3, the difference between the two settings, CEM+diversity top k and CEM+diversity x |M|, is unclear. Further explanation would be helpful for the readers.
3. The authors did not conduct experiments on the CEM + compatibility setting. In the Karel-Long environments, most of the improvement seems to come from the compatibility heuristic.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Figure 6 shows how the adoption of the diversity factor spreads the skill set over the embedding space. However, the diversity of the embedding vectors does not necessarily guarantee the diversity in the actual behaviour. How different are the programs' actual behaviors in the final skill set?
2. From Section A.2, it seems like at every step, the procedure selects one program through CEM. Wouldn't this approach cause the programs selected in the earlier stages to be sub-optimal?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
>The execution time will differ from program to program, but HiPO does not consider this factor while incorporating a discount factor $\gamma$ of 0.99.
During the option retrieval process, the normalized reward defined in Equation 1 explicitly considers the execution time of each program in the sampled option sequence through the discount factor $\gamma$ (i.e., within each program execution, a distant reward is discounted more than an immediate reward). Therefore, the evaluation function $G$ introduced in Section 4.2.3 not only assesses the diversity and compatibility among different options, but also accounts for the varying execution times of the options across the sampled option lists. We will revise the paper to make this clear.
>HiPO does not discount the cumulative reward of low-level actions
After retrieving a set of options, each option can be viewed as a “macro action” by the high-level policy, and executing each macro action takes one “macro timestep” (i.e., the high-level policy acts at the beginning of the episode or after the selected option is fully executed), as depicted in Figure 2(b). Since these options are fixed and deterministic sub-policies in the SMDP, we use the cumulative reward of all the low-level actions from a program execution as the return of each macro action. This aligns with the existing hierarchical RL literature [1,2,3].
>The difference between the two settings, CEM+diversity top k and CEM+diversity x |M|, is unclear.
In CEM+diversity top k (k=|M|), we select the top k=|M| programs out of a total of N=10 CEM searches. On the other hand, CEM+diversity x |M| conducts |M| rounds of N=10 CEM search (a total of 10 x |M| CEM searches) and selects the best program in each round to get a total of |M| programs.
Note that when conducting the j-th CEM during the i-th CEM+diversity of CEM+diversity x |M|, the diversity multiplier is calculated based on the previous (i-1) retrieved programs and the current j programs. In comparison, the j-th CEM during CEM+diversity top k only considers the diversity among these j programs. We will revise the paper to make it clear.
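The difference between the two settings can be contrasted in a short sketch. This is illustrative only: `cem_run` is a hypothetical stand-in for one CEM search returning a scored program, and the diversity multiplier itself is omitted.

```python
import random

def cem_run(seed):
    """Stub for one CEM search: returns (program, fitness score)."""
    rng = random.Random(seed)
    return f"prog_{seed}", rng.random()

def topk_selection(M, N=10):
    """CEM+diversity top k (k = |M|): one round of N searches,
    keeping the k best programs overall."""
    results = sorted((cem_run(s) for s in range(N)),
                     key=lambda r: r[1], reverse=True)
    return [prog for prog, _ in results[:M]]

def times_M_selection(M, N=10):
    """CEM+diversity x |M|: |M| rounds of N searches each (N * |M|
    searches total); every round keeps only its single best program.
    In the real method, round i's diversity multiplier also accounts
    for the i-1 programs retrieved in earlier rounds."""
    retrieved = []
    for i in range(M):
        best = max((cem_run(i * N + s) for s in range(N)),
                   key=lambda r: r[1])
        retrieved.append(best[0])
    return retrieved

print(topk_selection(M=3))     # 3 programs from 10 searches
print(times_M_selection(M=3))  # 3 programs from 30 searches
```

The ×|M| variant thus spends |M| times as many searches but conditions each round on what has already been retrieved.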
>The authors did not conduct experiments on the CEM + compatibility setting.
As suggested by the reviewer, we conduct additional experiments of CEM+compatibility ×|M|.
| Method | Seesaw | UP-N-Down | Farmer | Inf-DoorKey | Inf-Harvester |
|-|-|-|-|-|-|
| CEM × \|M\| | 0.06 $\pm$ 0.10 | 0.39 $\pm$ 0.36 | 0.03 $\pm$ 0.00 | 0.11 $\pm$ 0.14 | 0.41$\pm$ 0.17 |
| CEM+diversity top k, k=\|M\|| 0.15 $\pm$ 0.21 | 0.25 $\pm$ 0.35 | 0.03 $\pm$ 0.00 | 0.13 $\pm$ 0.16 | 0.42$\pm$ 0.19 |
| CEM+diversity × \|M\|| **0.28** $\pm$ 0.23 | **0.58** $\pm$ 0.31 | 0.03 $\pm$ 0.00 | 0.36 $\pm$ 0.26 | 0.47$\pm$ 0.23 |
| CEM+compatibility × \|M\| | 0.23 $\pm$ 0.32 | 0.43 $\pm$ 0.34 | **0.14** $\pm$ 0.22 | **0.57** $\pm$ 0.3 | **0.66** $\pm$ 0.08 |
Across tasks listed above, CEM+compatibility × |M| performs better than CEM × |M|, showing the effectiveness of the compatibility measure on its own. For multi-stage tasks like Farmer and Inf-DoorKey, CEM+compatibility × |M| performs better than CEM+diversity top $k$ and CEM+diversity × |M|, indicating that the compatibility measure can help CEM more easily find skills suitable for multi-stages compared to the diversity measure. We will revise the paper to include this experiment.
>The diversity of the embedding vectors does not necessarily guarantee the diversity in the actual behaviour.
We learn a program embedding space following Trivedi et al. [4]. The objective function features a latent behavior reconstruction loss, which aims to ensure that in the learned embedding space, similar behaviors are encoded closer and drastically different behaviors are encoded far from each other.
>How different are the programs' actual behaviors in the final skill set?
Figures 21, 22, 23, and 24 present the retrieved options using our method. Taking Inf-DoorKey as an example, this task requires iterating between three skills:
- Picking up the key in a chamber to open the door. Option 1 in Figure 23 is suitable for acquiring the skill since this program will check whether a key exists in the agent position and pick it up to open the door.
- Navigating inside and between chambers. Option 2 in Figure 23 is suitable for acquiring the skill since this program contains numerous tokens about moving and turning directions.
- Locating and placing the key in a specific position to open the door. Option 3 in Figure 23 is suitable for acquiring the skill since this program will first place the key and then pick it up.
>From Section A.2, it seems like at every step, the procedure selects one program through CEM. Wouldn't this approach cause the programs selected in the earlier stages to be sub-optimal?
The programs retrieved in the earlier stages could be suboptimal, which motivates us to design the diversity and compatibility rewards to ensure the performance of the retrieved programs as a whole. Despite the potential suboptimal retrieved programs, the experimental results in Sections 5.2, 5.3, and 5.4 show that the high-level policy can effectively reuse programmatic options selected by the proposed procedure, outperforming the baselines.
---
References:
[1] Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical reinforcement learning. In NeurIPS, 2018.
[2] Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta learning shared hierarchies. In ICLR, 2018.
[3] Youngwoon Lee, Jingyun Yang, and Joseph J. Lim. Learning to coordinate manipulation skills via skill behavior diversification. In ICLR, 2020.
[4] Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, and Joseph J Lim. Learning to synthesize programs as interpretable and generalizable policies. In NeurIPS, 2021.
---
Rebuttal 2:
Title: Reminder: The reviewer-author discussion period ends in three days
Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses adequately address the concerns raised by the reviewer, including the following points.
- A discussion of discount factors in our hierarchical RL setup
- A clarification of the two settings, CEM+diversity top k and CEM+diversity x |M|
- Additional results of CEM + compatibility
- An explanation of diversity in program embeddings and behaviors
- A discussion of suboptimal retrieved programs
Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. Again, we thank the reviewer for all the detailed review and the time the reviewer put into helping us to improve our submission.
---
Rebuttal Comment 2.1:
Comment: I'm very sorry for the late reply. Here are some clarifications I want to make.
Unlike [1, 2, 3], where the length of a high-level action is fixed, HIPO deals with variable-length high-level actions. This can be problematic in certain situations. Consider the following two sequences of rewards:
1) (0, 1, 0, 1, 0, 1), (0, 1)
2) (0, 1), (0, 1), (0, 1), (0, 1.01)
where rewards from the same macro-action are grouped by parentheses. Under the return computation scheme used by HIPO, sequence 1 results in $3+0.99\times 1=3.99$ and sequence 2 results in $1+0.99\times 1+0.99^2\times 1+0.99^3\times 1.01\approx 3.95$. Even though sequence 2 is better (its total reward is 4.01 vs. 4), HIPO will prefer the first one.
---
Reply to Comment 2.1.1:
Title: Re: Official Comment by Reviewer p5Z4
Comment: We thank the reviewer for the question with further clarification.
> Unlike [1, 2, 3], where the length of a high-level action is fixed, HIPO deals with variable-length high-level actions. This can be problematic in certain situations. Consider the following two sequences of rewards ...
Due to the temporal abstraction brought by the hierarchical structure, the high-level policy evaluates the effectiveness (i.e., cumulative reward) of each macro action no matter how long or short each macro action is. That is, learning the high-level policy reduces to a standard RL problem. As pointed out by the reviewer, with a discount factor $\gamma<1$, the high-level policy prefers immediate rewards to delayed rewards. This has proven effective, leads to better convergence, and is widely adopted in most RL algorithms [1-4], with or without a hierarchical policy structure. Note that such a setting is also adopted by [5], whose low-level policy (transition policy) also has varying horizons.
On the other hand, the preference of the high-level policy can be adjusted under different values of $\gamma$. For example, if the value of $\gamma$ is raised from $0.99$ to $0.999$, the discounted return of sequence 1 in the example above is $3 + 0.999 \times 1 = 3.999$, and the discounted return of sequence 2 is $1 + 0.999 \times 1 + 0.999^2 \times 1 + 0.999^3 \times 1.01 \approx 4.003974$. With this adjustment, the high-level policy prefers sequence 2. Hence, the desired value of the discount factor can vary from one task to another.
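The preference flip described above can be verified numerically. This is a quick check, assuming (as in the rebuttal) that rewards within a macro action are summed undiscounted and each macro timestep is discounted by $\gamma$:

```python
def macro_return(reward_groups, gamma):
    """Discounted return over macro actions: sum each macro action's
    low-level rewards undiscounted, then discount per macro timestep."""
    return sum(gamma ** t * sum(group)
               for t, group in enumerate(reward_groups))

seq1 = [(0, 1, 0, 1, 0, 1), (0, 1)]         # macro-action totals: 3, 1
seq2 = [(0, 1), (0, 1), (0, 1), (0, 1.01)]  # totals: 1, 1, 1, 1.01

for gamma in (0.99, 0.999):
    r1, r2 = macro_return(seq1, gamma), macro_return(seq2, gamma)
    print(f"gamma={gamma}: seq1={r1:.4f}, seq2={r2:.4f}")
# gamma=0.99 prefers seq1 (3.9900 > 3.9501);
# gamma=0.999 prefers seq2 (4.0040 > 3.9990).
```

This confirms that the choice between the two reward sequences is governed by the discount factor, not by the return-computation scheme alone.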
We thank the reviewer for the fruitful discussion and will revise the paper to include it. Also, we hope our initial rebuttal sufficiently addresses other questions raised in your initial review. Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit.
[1] Nachum et al., "Data-efficient hierarchical reinforcement learning." In NeurIPS, 2018.
[2] Frans et al., "Meta learning shared hierarchies." In ICLR, 2018.
[3] Lee et al., "Learning to coordinate manipulation skills via skill behavior diversification." In ICLR, 2020.
[4] Schulman et al., "Proximal policy optimization algorithms." 2017.
[5] Lee et al., "Composing Complex Skills by Learning Transition Policies." In ICLR, 2019. | Summary: The authors present HIPO, a method that uses a program embedding space to create options. Programs are retrieved from this space when they are diverse and effective, forming a set of options later used by a learned high-level policy. To evaluate their framework, since the existing KAREL benchmark does not include long and repetitive tasks, they introduce KAREL-LONG and evaluate their method on both benchmarks. Their results show that HIPO is a viable hierarchical RL method.
Strengths: The paper is sound and clear. The structure is clear, the contributions easily identifiable, the figures, table are easily understandable.
I am not a deep expert in options or program synthesis, but the paper is so clear that it gave me sufficient insight to situate it and understand its method. The KAREL-LONG benchmark is well motivated, as one can see that the original benchmark is not sufficient to differentiate the existing SOTA methods.
Their evaluation shows that HIPO helps with long and repetitive tasks, while being the strongest competitor on the existing KAREL domain.
I'm already giving an accept, but might further raise based on the other reviews and the rebuttal.
Weaknesses: The only main weakness I was able to identify is the:
**Lack of limitation section**. A dedicated *Limitation* section could clearly help situate the advantages and drawbacks of HIPO and compare it to other methods. The authors could step a little outside the programmatic literature and look into, e.g., interpretable logic-based RL [1, 2, 9, 10], interpretable RL methods that use LLMs [3] (which can be useful for program synthesis), tree-based policies (convertible to tree programs) [8, 5, 6, 7], etc. I think the "background" part of the current related work section could be included in the *Introduction*, and a dedicated *Limitation and related work* section could include such a discussion. It would situate the method in the broader literature and bring attention to the nice HIPO method in these communities as well. I give some pointer references, but more could be searched for.
Further smaller concerns are:
**Interpretability allows detecting otherwise invisible suboptimal behaviors.** It has been shown that non-interpretable (e.g., deep) RL methods often learn misaligned policies [4, 5, 6] without this problem being spotted, on tasks as simple as Pong [5]. This is a much stronger argument to place directly in the introduction than the lack of transparency and trust. I would start off with such an argument, highlighting the need for, e.g., readable, interpretable programs.
**The authors could slightly modify their method name (if they wish).** The method's name might collide with *Hierarchical Proximal Policy Optimization (HIPPO)*, an existing well-known option learning framework.
-------------------------------
[1] Jiang, Zhengyao, and Shan Luo. "Neural logic reinforcement learning." International conference on machine learning. PMLR, 2019.
[2] Xu, Duo, and Faramarz Fekri. "Interpretable model-based hierarchical reinforcement learning using inductive logic programming." arXiv preprint arXiv:2106.11417 (2021).
[3] Luo, Lirui, et al. "INSIGHT: End-to-End Neuro-Symbolic Visual Reinforcement Learning with Language Explanations." arXiv preprint arXiv:2403.12451 (2024).
[4] Di Langosco, Lauro Langosco, et al. "Goal misgeneralization in deep reinforcement learning." International Conference on Machine Learning. PMLR, 2022.
[5] Delfosse, Quentin, et al. "Interpretable concept bottlenecks to align reinforcement learning agents." arXiv preprint arXiv:2401.05821 (2024).
[6] Kohler, Hector, et al. "Interpretable and Editable Programmatic Tree Policies for Reinforcement Learning." arXiv preprint arXiv:2405.14956 (2024).
[7] Delfosse, Quentin, et al. "HackAtari: Atari Learning Environments for Robust and Continual Reinforcement Learning." arXiv preprint arXiv:2406.03997 (2024).
[8] Bastani, Osbert, Yewen Pu, and Armando Solar-Lezama. "Verifiable reinforcement learning via policy extraction." Advances in neural information processing systems 31 (2018).
[9] Delfosse, Quentin, et al. "Interpretable and explainable logical policies via neurally guided symbolic abstraction." Advances in Neural Information Processing Systems 36 (2024).
[10] Ma, Z., Zhuang, Y., Weng, P., Zhuo, H. H., Li, D., Liu, W., & Hao, J. (2021). Learning symbolic rules for interpretable deep reinforcement learning. arXiv preprint arXiv:2103.08228.
Technical Quality: 4
Clarity: 4
Questions for Authors: * Do the KAREL-LONG environments contain separate training and testing environments? As programs are compared, it might be insightful to test the generalizability of the different methods (by varying the size of the environment).
* I am not sure whether the option returns a terminal token or whether the high-level controller reselects an option at each timestep. This could be made clearer.
* What is the percentage of retrieved programs (based on your effectiveness and diversity criteria)?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: This constitutes my main concern, I have not been able to clearly situate a (broader discussion) on the limitations. Again, I would create such a section, bringing most of the related work in it, as an opportunity to criticize (both positively and negatively) HIPO, and other approaches.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
>A dedicated Limitation section could clearly help situate the advantages and drawback of HIPO, and compare it to other methods. Authors could step a little bit outside programmatic and look into e.g. interpretable logic-based RL [1, 2, 9, 10], interpretable RL methods that use LLM [3], tree-based policies [8, 5, 6, 7], ... etc.
We thank the reviewer for providing the references and suggestions. We will revise the paper to discuss these works by adding a dedicated section or merging it with Section J, which points out the challenges of DSL design and the interpretability-performance tradeoff.
>Interpretability allows to detect invisible suboptimal behaviors
We thank the reviewer for this insight. We totally agree that interpretability can help detect misaligned policies. We will revise the paper to incorporate this discussion.
>Authors could slightly modify your method name (if they wish). Your methods' name might collide with Hierarchical Proximal Policy Optimization (HIPPO), an existing known option learning framework.
We thank the reviewer for the suggestion. We will try to adjust the name of our method to avoid potential confusion.
>It might be insightful to test the generalizability of the different methods (by varying size, of the environment).
In Section 5.6, we evaluate inductive generalization by testing in Karel-Long environments with longer horizons than the training environments. Table 4(b) indicates that HIPO generalizes better to testing environments with significantly extended horizons compared to the baselines.
We will add other generalization settings (e.g., varying the size of the environment) to further test the proposed framework in the revision.
>I am not sure if the option is returning a terminal token or if the high level controller is reselecting an option at each timestep.
As detailed in Figure 2 and Section 4.3, the high-level controller will reselect an option after the previously selected programmatic option is executed and terminated. After executing the selected programmatic option, the high-level policy will receive the execution trace (i.e., a series of (state, action, reward) tuples) collected during the program execution. Based on the final state in the execution trace and the current selected programmatic option, the high-level policy will choose the next programmatic option. We will revise the paper to make this clear.
>What's the percentage of retrieved program (based on your effectiveness and diversity criterion)?
The theoretical search space of each programmatic option is about $30^{40}$, i.e., the token-space size raised to the power of the program length. Throughout the experiments conducted in this paper, we sample approximately 50,000 unique programs according to our effectiveness and diversity criteria to solve a given task, which covers only a tiny fraction of the total program space; therefore, an effective retrieval method is essential.
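As a rough sanity check of these magnitudes (taking the token-space size of 30 and program length of 40 stated above):

```python
space = 30 ** 40            # theoretical number of candidate programs
sampled = 50_000            # unique programs sampled per task
fraction = sampled / space

print(f"search space  ~ {space:.2e}")     # ~1.22e+59
print(f"fraction seen ~ {fraction:.2e}")  # ~4.11e-55
```

The sampled set is vanishingly small relative to the full space, which is the quantitative version of the argument for guided retrieval.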
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications
Comment: I want to thank the authors for their clarifications.
I want to insist on the fact that they do not have to change their method name, I just gave an advice, but the choice is theirs.
If the last point above has not been discussed in the paper, I would advise adding it as well.
Overall, I hope that the promised modifications will be incorporated in the manuscript, and hope to see this paper presented at the conference.
---
Reply to Comment 1.1.1:
Title: Re: Thank you for the clarifications
Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We will definitely revise the paper to incorporate all the modifications that were promised. | Summary: The paper presents HIPO, a method for learning programmatic options for solving problems with long-horizon and repetitive tasks. First, HIPO searches in the space defined by a domain-specific language for a set of diverse programs. This set of programs is generated while accounting for the diversity and composability of the programs. Finally, these programs are used as options for a neural policy. HIPO was evaluated on instances of Karel the Robot, where it was shown to be more sample efficient than Deep RL and other programmatic methods. The paper also includes a carefully designed ablation study on HIPO's components.
Strengths: The idea of using programmatic representations to learn options is interesting and valuable. The authors correctly state that using programmatic policies as options as opposed to policies means that one sacrifices, to some extent, the interpretability of the policies in favor of performance.
Another strength of HIPO is its natural combination of programmatic and neural representations -- this is not mentioned in the paper and perhaps it should be! The repetitive behavior required to solve the tasks is given by the programmatic options, which can achieve this through the strong inductive bias the Karel language offers. The neural policy is then responsible for orchestrating these different components, which would clearly be difficult through program synthesis.
I am particularly impressed by the results on the DoorKey domain. This is such a difficult domain for program synthesis and DRL. HIPO managed to achieve a really high score in that domain. I do have some clarifying questions about this result (please see them in the Questions section of this review).
I also appreciated the extra effort the authors put into translating the neural model that orchestrates the options into automata for interpretability. Since the automata can also be seen as programs, it would be interesting to see their performance or even their translation into the Karel language.
Weaknesses: The paper has a few weaknesses too. It would be helpful if the authors managed to fix the following for the camera ready, if the paper is accepted.
1. HIPO represents a method for learning programmatic options, but it includes no option-learning baseline.
2. Searching directly in the programmatic space was recently shown to outperform the latent search used in HIPO. It would be valuable to include this type of search as a baseline too.
3. It would be interesting to measure sample efficiency in terms of samples instead of programs. Intuitively, some programs could run much longer and be more costly in real settings. This would also allow for a comparison with DRL in terms of sample complexity. In general, I find learning curves more informative than the tables presented in the paper, which show only asymptotic performance.
4. It would be valuable to have more runs of the experiments (>30) and show confidence intervals instead of standard deviations, as CIs are easier to draw conclusions from without statistical tests.
Overall, in my opinion, the strengths of the paper outweigh its weaknesses. This is because the paper shows a different way of using programmatic representations that the community should further explore.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please feel free to comment on the points listed under weaknesses. In addition to them, I will add the following questions and comments.
1. I am puzzled by the outstanding performance of HIPO in DoorKey. The way that the options are selected is that they have to be diverse and they have to maximize the agent's return. Since the options are selected as a pre-processing step, before orchestrating them, the programs must be collecting the key and receiving a positive reward, all of them. Once they are orchestrated, they happen to contain the behavior needed to find the door and collect the marker after the key is found. Is it because the composability step is already finding a solution that stitches together options that are able to solve the problem? If that is the case, would it be valuable to add a baseline that performs diversity+composability and returns the best order used in the composability step?
2. Why does HIPO use an option that terminates the episode? This doesn't make sense to me. The episode finishes whenever it finishes. That is, it is not up to the agent to decide when the episode finishes. This is a property of the MDP.
3. The caption of Figure 3 says that the number of markers goes from 36 to 400 as we move from Harvester to Inf-Harvester. Due to the structure of the problem, isn't it possible to find a short program that is able to collect an arbitrary number of markers?
4. I would like to challenge the desiderata of the options (lines 136-139). First, an option doesn't have to be effective in the sense that it needs to "obtain some task reward." It just needs to be part of the solution. For example, in the DoorKey problem, if the agent only received a positive reward once the problem was solved, an option that collects the key would still be helpful. Second, I suspect that the property of compatibility is only needed because the neural agent doesn't have access to the primitive actions while orchestrating the options. If the agent had access to them, it would be able to combine any "helpful option" by using primitive actions in between them. However, I understand that even if the agent had access to primitive actions, it could still find value in compatibility. In this case, compatibility would be playing the effectiveness property, and it would be looking at ways of how different programs can be combined to maximize the agent's return.
5. How can the diversity score be defined for searching directly in the programmatic space? Could one use the losses used to train the latent space to define diversity? Or is the diversity score only possible because of the properties of the latent space?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper has a limitations section in the appendix that addresses some important limitations, such as the trade-off between interpretability and performance. I would also add the points raised in the Weaknesses section of this review and the use of a single domain in the experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
> Option-learning baselines
As suggested by the reviewer, we additionally experimented with the option-critic architecture [1] and reported the comparison to our method below.
| Method | Seesaw | UP-N-Down | Farmer | Inf-DoorKey | Inf-Harvester |
|-|-|-|-|-|-|
| Option-critic | 0.00 $\pm$ 0.00 | 0.00 $\pm$ 0.00 | 0.00 $\pm$ 0.00 | 0.00 $\pm$ 0.00 | 0.47 $\pm$ 0.01 |
| HIPO (Ours) | **0.53** $\pm$ 0.10 | **0.76** $\pm$ 0.02 | **0.62** $\pm$ 0.02 | **0.66** $\pm$ 0.07 | **0.79** $\pm$ 0.02 |
The results show that our method outperforms option-critic on all the Karel-Long tasks. Option-critic performs poorly on all tasks except Inf-Harvester, likely because of the sparse-reward nature of these tasks and the environment's per-action cost, which possibly forces the options to be trained to terminate quickly. We will revise the paper to include this result.
> Search in the programmatic space
As suggested by the reviewer, we conduct additional experiments that search directly in the programmatic space using the hill climbing (HC) approach proposed by Carvalho et al. [2].
| Method | Seesaw | UP-N-Down | Farmer | Inf-DoorKey | Inf-Harvester |
|-|-|-|-|-|-|
| HC | 0.22 $\pm$ 0.08 | 0.63 $\pm$ 0.26 | 0.19 $\pm$ 0.03 | 0.14 $\pm$ 0.16 | **0.88** $\pm$ 0.00 |
| HIPO (Ours) | **0.53** $\pm$ 0.10 | **0.76** $\pm$ 0.02 | **0.62** $\pm$ 0.02 | **0.66** $\pm$ 0.07 | 0.79 $\pm$ 0.02 |
The results show that searching directly in the programmatic space can achieve better performance on tasks with denser rewards, e.g., Inf-Harvester. On the other hand, HIPO performs better on sparse-reward tasks requiring diverse skills. We will revise the paper to include this baseline.
> Environment-step sample efficiency
We thank the reviewer for the suggestion. We will add this plot to the revised paper.
> More runs of the experiments >30 and confidence intervals
We thank the reviewer for the suggestion. We presented standard deviations with five random seeds following the standard practice in RL literature. We will add more runs and show confidence intervals in the revision.
> Performance of HIPO in DoorKey
The diversity multiplier encourages exploring programs with different behaviors during the search process. Therefore, after retrieving the program that "finds the key" as the first option, other behaviors like "navigating" or "putting a marker" are more likely to be retrieved as subsequent options.
Given this diversity among the options, it is possible to sample option execution orders that solve the problem under the evaluation function defined in Section 4.2.3. If a program can be stitched together with the retrieved options in some specific order to solve the problem, then the evaluation function will return a high score for this new option.
> Best order used in the composability step
As suggested by the reviewer, we report the return obtained by executing the best random sequence order found in the compatibility step during the search of CEM+diversity+compatibility.
| Method | DoorKey | Seesaw | UP-N-Down | Farmer | Inf-DoorKey | Inf-Harvester |
|-|-|-|-|-|-|-|
| Best Random Sequence | **1.00** $\pm$ 0.00 | 0.04 $\pm$ 0.02 | 0.17 $\pm$ 0.08 | 0.05 $\pm$ 0.03 | 0.06 $\pm$ 0.06 | 0.60 $\pm$ 0.02 |
| HIPO (Ours) | **1.00** $\pm$ 0.00 | **0.53** $\pm$ 0.10 | **0.76** $\pm$ 0.02 | **0.62** $\pm$ 0.02 | **0.66** $\pm$ 0.07 | **0.79** $\pm$ 0.02 |
The results show that, without the high-level policy, executing options in a specific sequence can already achieve good performance on short-horizon or denser-reward tasks requiring fewer skills, e.g., DoorKey and Inf-Harvester. On the other hand, HIPO performs better on long-horizon and sparse-reward tasks, indicating the necessity of integrating the high-level policy with the option-searching process when solving repetitive and long tasks.
> Termination option
We implement the termination option as a “do nothing” low-level policy, i.e., an empty program, that triggers null action to avoid the per-action cost as described in Section H, which urges the agent to solve the tasks as efficiently as possible by reducing redundant actions. That said, as pointed out by the reviewer, the MDP decides when to terminate, and the agent can learn to take null action to avoid action costs.
> A short program that is able to collect an arbitrary number of markers
Yes, it is possible to find a short program that collects an arbitrary number of markers in the Inf-Harvester task. Despite the long-horizon nature of this task, it only requires repeatedly traversing and picking markers, which can be easily encapsulated by short programs with a small number of primitive actions inside an outer WHILE control statement.
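For illustration, such a program could take roughly the following shape in a Karel-style DSL (an illustrative sketch of our own; the perception and action names follow common Karel grammars and may differ from the paper's exact syntax):

```
DEF run m(
  WHILE c( frontIsClear c) w(
    IF c( markersPresent c) i(
      pickMarker
    i)
    move
  w)
m)
```

A single outer WHILE over a traversal condition, wrapping a pick-then-move body, keeps collecting markers for as long as the episode continues.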
> I would like to challenge the desiderata of the options (lines 136-139).
We thank the reviewer for the insight. We will remove “(i.e., obtain some task rewards)” from the definition of effectiveness.
> Diversity score for programmatic spaces
We think it is not trivial to define the diversity score in a programmatic space. Specifically, taking the programmatic space defined by abstract syntax trees (ASTs) proposed in Carvalho et al. [2] as an example, it is not clear how we can compute the distance between a pair of ASTs. Exploring how the diversity score can be defined in programmatic spaces is an interesting future direction.
---
References:
[1] Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Association for the Advancement of Artificial Intelligence, 2017.
[2] Tales Henrique Carvalho, Kenneth Tjhia, and Levi Lelis. Reclaiming the source of programmatic policies: Programmatic versus latent spaces. In ICLR, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering all my questions, I appreciate the effort. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Target-Guided Adversarial Point Cloud Transformer Towards Recognition Against Real-world Corruptions | Accept (poster) | Summary: This paper introduces a framework consisting of two modules: the Adversarial Significance Identifier, which selects tokens with high importance, and the Target Guided Prompter, which selectively drops important tokens to achieve more generalized performance. This approach aims to mitigate the problem of overfitting to specific patterns.
Strengths: S1. The idea of digging out sub-optimal patterns that could contribute to the final performance is interesting.
S2. The framework seems like a general idea that could be applied to other methods as a supplemental module.
Weaknesses: W1. The presentation remains to be improved
W2. The experiment is not strong enough
Technical Quality: 3
Clarity: 2
Questions for Authors: O1. The term "local pattern" is ambiguous and lacks a formal definition. Does it refer to a graph constructed from the Point Cloud set tokens or the importance ranking of the tokens? The paper should provide a clear and formal definition of "local pattern."
Regarding Figure 1, the confusion matrices in parts (a) and (b) appear indistinguishable to the reviewer. Clarification on their differences would be beneficial.
The abstract is difficult to understand, particularly in describing the two modules. It requires significant improvement for readability. The description in lines 42-52 is much clearer than the abstract and could serve as a model for revision.
O2. The paper employs the mini-pointnet method to generate Point Cloud tokens. However, there are other point cloud tokenizing methods available. The paper should evaluate the adversarial mechanism using different tokenization methods to validate its robustness.
O3. The proposed framework appears general and potentially applicable to various methods, which is a positive aspect. Can this method be applied to other state-of-the-art (SOTA) methods by incorporating the "digging sub-optimal patterns" mechanism? If so, the reviewer recommends including experiments demonstrating the broader applicability of the proposed method.
O4. The notations used in the focal tokens identification section are unclear. The paper uses m = 1, ..., D when introducing the F_topk, which is a R^{k \times C} matrix. Is D equivalent to C? The reviewer did not find a definition for D. Additionally, M in equation (2) is introduced as a vector rather than a matrix. The paper should revise the notation for clarity.
The process for selecting the top-k tokens is not adequately described. Detailed explanation on how these tokens are chosen should be included.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The paper claims that its limitation is that optimal utilization remains unexplored. The authors state that they will address this issue in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed and insightful reviews. We hope our response can address your concerns.
>**Q1.1: The term "local pattern" is ambiguous and lacks a formal definition. Does it refer to a graph constructed from the Point Cloud set tokens or the importance ranking of the tokens? The paper should provide a clear and formal definition of "local pattern.".**
Sorry for the confusion about the definition. **Local pattern** refers to the geometric structure of a small region within the point cloud, which is captured by a subset of tokens. The clear and formal definition will be included in the final manuscript.
>**Q1.2: Regarding Figure 1, the confusion matrices in parts (a) and (b) appear indistinguishable to the reviewer. Clarification on their differences would be beneficial.**
Sorry for the confusion. We have provided a visualization in the **authors' response document** that highlights the differences more clearly. The dominant red along the diagonal underscores our approach's superior performance compared to the standard transformer. We will incorporate it in the final manuscript to ensure the distinctions are clear.
>**Q1.3: The abstract is difficult to understand, particularly in describing the two modules. It requires significant improvement for readability. The description in lines 42-52 is much clearer than the abstract and could serve as a model for revision**
Sorry for the confusion about the abstract. As you suggested, we have carefully revised the abstract for readability. The revised abstract will be included in the final manuscript.
>**Q2: The paper employs the mini-pointnet method to generate Point Cloud tokens. The paper should evaluate the adversarial mechanism using different tokenization methods to validate its robustness.**
Thanks for your advice! As you suggested, we compare our method with two alternative tokenization methods: mini-DGCNN and mini-PCT. The results are shown in the table below.
As we can see from the results, the performance of our adversarial mechanism is relatively stable across different tokenization methods. This indicates that our method is not sensitive to the specific tokenization method used and can effectively improve the robustness of point cloud models. Additionally, we would like to mention that the choice of tokenization method may affect the performance of the model on different tasks and datasets. Therefore, we recommend exploring different tokenization methods and choosing the most suitable one for specific applications. We will add the results in the final version.
| Methods | mCE(%, $\downarrow$) |
|----------------|--------|
| mini-DGCNN | 71.1 |
| mini-PCT | 72.4 |
| mini-PointNet (Ours) | 72.2 |
>**Q3: The proposed framework appears general and potentially applicable to various methods, which is a positive aspect. Can this method be applied to other state-of-the-art (SOTA) methods by incorporating the "digging sub-optimal patterns" mechanism?**
Thank you for your suggestion! As you suggested, we have extended the 'digging sub-optimal patterns' mechanism to two state-of-the-art (SOTA) methods, PointM2AE and PointGPT, on the ModelNet-C dataset. The results are promising and demonstrate the general applicability of our approach.
As shown in the table below, incorporating our 'digging sub-optimal patterns' mechanism into PointM2AE and PointGPT results in a significant reduction in mCE scores. These results suggest that our approach can effectively enhance the robustness of various point cloud recognition models. By encouraging the model to explore and utilize a broader range of patterns, our method enables the models to better generalize to corrupted data.
| Methods | mCE(%, $\downarrow$) |
|----------------|--------|
| PointM2AE | 83.9 |
| `digging sub-optimal patterns` | 82.9 ( $\downarrow$ 1.0) |
| PointGPT | 83.4 |
| `digging sub-optimal patterns` | 82.0 ( $\downarrow$ 1.4) |
>**Q4.1: The notations used in the focal tokens identification section are unclear. The paper uses m = 1, ..., D when introducing the F_topk, which is a R^{k \times C} matrix. Is D equivalent to C? The reviewer did not find a definition for D.**
Sorry for the confusion. This is a typo. The variable D in F_topk should indeed be equivalent to C, which represents the number of feature channels in the tokens. We will revise the paper to correct this error and ensure consistency in our notation.
>**Q4.2: Additionally, M in equation (2) is introduced as a vector rather than a matrix.**
You are correct that M in equation (2) is a vector rather than a matrix. We will revise the equation and the corresponding text to reflect this correction.
>**Q4.3: The process for selecting the top-k tokens is not adequately described. Detailed explanation on how these tokens are chosen should be included.**
Sorry for the confusion. We will provide a more detailed explanation of how the focal tokens are selected.
1. **Feature Response Calculation**: For each token, we compute the feature response for each channel with the help of an auxiliary supervision process. This involves assessing how strongly each token responds in each of the C channels.
2. **Sorting Tokens by Channel Responses**: Once the feature responses are calculated, we sort the tokens based on their response values within each channel. This step ensures that tokens with higher responses are ranked higher.
3. **Selecting Focal Tokens**: After sorting, we select the top k tokens for each channel. This selection is done by choosing the k highest-ranked tokens based on their feature responses in each channel.
By following these steps, we ensure that only the most significant tokens, in terms of their feature responses, are retained for further processing.
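The three steps can be sketched in a few lines of NumPy (a minimal illustration with our own function name, toy values, and a simple union rule for merging per-channel selections; not the paper's released code):

```python
import numpy as np

def select_focal_tokens(features, k):
    """Keep, for each channel, the k tokens with the largest response.

    features: (N, C) array of per-token, per-channel feature responses.
    Returns a boolean mask of shape (N,) that is True for tokens selected
    in at least one channel (per-channel picks merged by union).
    """
    n_tokens, n_channels = features.shape
    mask = np.zeros(n_tokens, dtype=bool)
    for c in range(n_channels):
        # Sort tokens by their response in channel c, take the top k.
        topk = np.argsort(features[:, c])[::-1][:k]
        mask[topk] = True
    return mask

# Toy example: 5 tokens, 2 channels, keep the top-2 per channel.
F = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.7, 0.3],
              [0.1, 0.9],
              [0.4, 0.2]])
print(select_focal_tokens(F, k=2))  # tokens 0-3 are focal, token 4 is not
```

Here the per-channel selections are merged by union into one focal-token mask; the actual merging rule used in the paper may differ.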
For further clarity, we kindly refer you to the **A.6** part of the **Supplementary Material**, where we provide pseudo-code of focal tokens identification process.
---
Rebuttal Comment 1.1:
Comment: The reviewer is satisfied with the rebuttal and has increased the score. Please incorporate the rebuttal content into the final manuscript. Thank you.
---
Reply to Comment 1.1.1:
Comment: Thanks for your satisfaction with our reply! We greatly appreciate your positive evaluation. We will incorporate the additional experiments and improve the paper in the final version. If you have any further concerns or questions, please do not hesitate to reach out. We are committed to addressing any remaining issues promptly and thoroughly. Thank you again for your valuable feedback and best wishes! | Summary: The paper proposes a novel architecture called Target-Guided Adversarial Point Cloud Transformer (APCT) for robust 3D perception in the presence of corrupted data. The APCT integrates an Adversarial Significance Identifier and a Target-guided Promptor to augment global structure capture and enhance the model's resilience against real-world corruption. The paper presents extensive experiments on multiple benchmarks, demonstrating the effectiveness and state-of-the-art performance of the proposed method.
Strengths: * The paper introduces a novel architecture, APCT, that addresses the challenge of robust 3D perception in the presence of corrupted data.
* The APCT integrates an Adversarial Significance Identifier and a Target-guided Promptor, which effectively improve the resilience of point cloud models against various types of corruptions.
* The paper presents extensive experiments on multiple benchmarks, including ModelNet-C and ScanObjectNN-C, demonstrating the effectiveness and state-of-the-art performance of the proposed method.
Weaknesses: * Since data augmentation methods like PointMixUp and PointCutMix can improve robustness, experiments with them should be performed.
* Similar previous works like PointDP are not compared.
* Some typo errors in equation 3.
I would be willing to increase my score if the authors address these issues.
Technical Quality: 3
Clarity: 3
Questions for Authors: How about the performance on SOTA MVImageNet dataset?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: /NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed and insightful reviews. We hope our response can address your concerns.
>**Q1: Augmentation methods like PointMixUp and PointCutMix can improve the robustness, the experiments should be performed.**
Thanks for your advice! As you suggested, we further evaluate the performance of our approach with various data augmentation methods on the ModelNet-C dataset; the results are as follows. Besides the discussed PointMixup [1] and PointCutMix [2], we additionally incorporate experiments with the data augmentation techniques PointWOLF [3], RSMix [4], and WOLFMix [5].
Among these, PointMixup, PointCutMix, and RSMix fall under the category of mixing augmentation, where several point clouds are mixed following pre-defined rules. PointWOLF is a deformation technique that non-rigidly deforms local parts of an object. WOLFMix combines both mixing and deformation augmentations: it first deforms the objects and subsequently rigidly mixes the deformed objects together.
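As a rough illustration of the mixing category (a toy sketch in the spirit of PointCutMix, with our own function name; real PointCutMix replaces a spatially contiguous region rather than a random subset of points):

```python
import numpy as np

def mix_point_clouds(pc_a, pc_b, ratio, rng):
    """Replace a random fraction of points in pc_a with points from pc_b.

    pc_a, pc_b: (N, 3) point clouds; ratio: fraction of points replaced.
    """
    n = pc_a.shape[0]
    idx = rng.choice(n, size=int(n * ratio), replace=False)
    mixed = pc_a.copy()
    mixed[idx] = pc_b[idx]
    return mixed

rng = np.random.default_rng(0)
a = rng.normal(size=(1024, 3))
b = rng.normal(size=(1024, 3))
mixed = mix_point_clouds(a, b, ratio=0.3, rng=rng)
print(mixed.shape)  # (1024, 3)
```

In practice such mixing is typically applied on the fly in the training data loader, with the classification labels mixed in the same ratio.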
As shown in the table, data augmentation methods further improve the robustness of our method against point cloud corruptions. Employing mixing or deformation data augmentation techniques independently can enhance the robustness of the model, *e.g.* the results of our model with PointWOLF (67.0% mCE) and with PointMixup (66.2% mCE). When these two techniques are combined, as in WOLFMix, the robustness of the model is further augmented (64.7% mCE). Additionally, these experiments demonstrate the compatibility of our method with various data augmentation techniques, further underscoring its potential in addressing data corruption. We will add the results in the final version.
| Methods | mCE(%, $\downarrow$) |
|-----------------|--------|
| APCT (Ours) | 72.2 |
| + PointMixup | 66.2 |
| + PointCutMix-R | 69.7 |
| + PointWOLF | 67.0 |
| + RSMix | 71.3 |
| + WOLFMix | 64.7 |
Reference:
[1] Pointmixup: Augmentation for point clouds. ECCV 2020.
[2] Pointcutmix: Regularization strategy for point cloud classification. Neurocomputing 2022.
[3] Point Cloud Augmentation With Weighted Local Transformations. ICCV 2021.
[4] Regularization Strategy for Point Cloud via Rigidly Mixed Sample. CVPR 2021.
[5] Benchmarking and Analyzing Point Cloud Classification under Corruptions. ICML 2022.
>**Q2: Similar previous works like PointDP are not compared.**
Thanks for your advice! The primary objective of our approach is to enhance the model's robustness against real-world corruptions. Consequently, most of our experiments are centered around this goal. The experimental results presented in the paper demonstrate the effectiveness of our method in achieving this objective.
Improving the model's defense against point cloud attacks is a secondary goal. We made significant efforts to include comparisons with relevant methods such as PointDP and IF-Defense. However, we were unfortunately unsuccessful in these attempts.
1) As PointDP is not an open-source model, we were unable to obtain its implementation to evaluate its performance in comparison to our baseline method. Therefore, we could not include a direct comparison within the constraints of this submission.
2) Additionally, due to time limitations, we were unable to conduct experiments on other related method such as IF-Defense in time. However, we are committed to addressing this in the final manuscript and will make every effort to include these comparisons.
However, during rebuttal, we have conducted some extending experiments on **ModelNet40-C** dataset. We kindly refer you to our response to **Q2 of Reviewer K9Vn** for details.
We appreciate your understanding and will strive to improve our manuscript based on your valuable feedback.
>**Q3: Some typo errors in equation 3.**
Sorry for the typo errors. We have carefully revised the paper to fix all typos.
>**Q4: How about the performance on SOTA MVImageNet dataset?**
Thanks for your advice! We agree it is valuable to evaluate APCT on the more challenging MVImgNet [6] dataset.
In our paper, we have experimented on five datasets of different tasks, i.e., **ModelNet-C** and **ScanObjectNN-C** (classification against corruption), **ShapeNet-C** (part segmentation against corruption), **ScanObectNN** (classification), **ModelNet** (attack defense). On different benchmarks with various domains, our APCT can attain competitive performance to existing specialist models.
As you suggested, we further evaluate the performance of our approach on one additional dataset, **MVImgNet** [6]. It is a challenging benchmark for real-world point cloud classification, which contains 64,000 training and 16,000 testing samples. As shown in the table, our approach achieves 86.6% OA and still exhibits good generalization capacity in real-world scenarios.
| Methods | OA |
|----------------|--------|
| PointNet | 70.7 |
| PointNet++ | 79.2 |
| DGCNN | 86.5 |
| PAConv | 83.4 |
| PointMLP | 88.9 |
| APCT (Ours) | 86.6 |
Reference:
[6] MVImgNet: A large-scale dataset of multi-view images. CVPR 2023.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the rebuttal and thus keep my rate unchanged.
---
Reply to Comment 1.1.1:
Comment: Thanks for your satisfaction with our reply! We greatly appreciate your positive evaluation. We will incorporate the additional experiments and improve the paper in the final version. If you have any further concerns or questions, please do not hesitate to reach out. We are committed to addressing any remaining issues promptly and thoroughly. Thank you again for your valuable feedback and best wishes! | Summary: The paper introduces a novel architecture called the Adversarial Point Cloud Transformer (APCT). This model aims to enhance the robustness of 3D perception models against real-world corruptions. The APCT integrates two core components: the Adversarial Significance Identifier and the Target-guided Promptor. The Adversarial Significance Identifier identifies significant tokens by analyzing global context, while the Target-guided Promptor focuses the model's attention on less dominant tokens, effectively broadening the range of patterns the model learns. Extensive experiments demonstrate that APCT achieves state-of-the-art results on multiple corruption benchmarks, proving its effectiveness in handling various types of data corruptions.
Strengths: The paper introduces a novel approach by combining adversarial training with point cloud transformers.
The experiments are comprehensive and robust, demonstrating the effectiveness of the proposed method across various corruption scenarios.
The paper is well-written, with clear and detailed explanations and a logical flow. Visual aids effectively support the textual content.
The research addresses a critical issue in 3D point cloud recognition, providing valuable insights and practical solutions that can be applied in real-world scenarios.
Weaknesses: The paper could explore additional complex corruption scenarios beyond those covered.
The impact of the proposed method on computational overhead is not thoroughly discussed, which could be important for practical implementations.
While the method is validated on several datasets, more diverse and larger-scale datasets could further strengthen the findings [a].
[a] Benchmarking and Improving Robustness of 3D Point Cloud Recognition against Common Corruptions
Technical Quality: 3
Clarity: 3
Questions for Authors: I would like to see results on the mentioned ModelNet40-C dataset
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed and insightful reviews. We hope our response can address your concerns.
>**Q1: Disscussion about the impact of the proposed method on computational overhead could be important for practical implementations.**
Thanks for your advice! We agree it is important to discuss the computational overhead of the proposed method. As you suggested, we analyze the impact of our method on computational overhead below, including memory, training speed, and inference speed. Experiments are conducted on one GeForce RTX 3090.
As seen in the table, our method incurs no additional memory overhead. Compared with the baseline, our method brings a slight decrease in training speed (~6% delay) and inference speed (~6% delay), while delivering a significant 4.0-point mCE reduction.
| Method | Memory (G) | Train speed (samples/s) | Infer speed (samples/s) | mCE (%, $\downarrow$) |
|-----------------------|------------|-------------------------|-------------------------|---------|
| Baseline | 10.9 | 415.2 | 1111.7 | 76.2 |
| +Ours | 10.9 | 383.4 | 1045.8 | **72.2**|
| Δ | - | ↓~6% | ↓~6% | ↓4.0|
>**Q2: Results on the mentioned ModelNet40-C dataset.**
Thanks for your advice! As you suggested, we further evaluate the performance of our approach on the **ModelNet40-C** [1] dataset. It is a comprehensive benchmark of 3D point cloud corruption robustness, consisting of 15 common and realistic corruptions.
As shown in the table, our approach exhibits remarkable robustness on the ModelNet40-C dataset: it outperforms the strong PCT baseline by 1.4 points and achieves an ER_cor of 24.1. These results demonstrate that our APCT has excellent robustness to various point cloud corruptions. We will add the results in the final version.
| Model | ER_cor ↓ | Occlusion | LiDAR | Density Inc. | Density Dec. | Cutout | Uniform | Gaussian | Impulse | Upsampling | Background | Rotation | Shear | FFD | RBF | Inv. RBF |
|-------------|--------|-----------|-------|--------------|--------------|--------|---------|----------|---------|------------|------------|----------|-------|------|------|----------|
| PointNet | 28.3 | 52.3 | 54.9 | 10.5 | 11.6 | 12.0 | 12.4 | 14.4 | 29.1 | 14.0 | 93.6 | 36.8 | 25.4 | 21.3 | 18.6 | 17.8 |
| PointNet++ | 23.6 | 54.7 | 66.5 | 16.0 | 10.0 | 10.7 | 20.4 | 16.4 | 35.1 | 17.2 | 18.6 | 27.6 | 13.4 | 15.2 | 16.4 | 15.4 |
| DGCNN | 25.9 | 59.2 | 81.0 | 14.1 | 17.3 | 15.4 | 14.6 | 16.6 | 24.9 | 19.1 | 53.1 | 19.1 | 12.1 | 13.1 | 14.5 | 14.0 |
| RSCNN | 26.2 | 51.8 | 68.4 | 16.8 | 13.2 | 13.8 | 24.6 | 18.3 | 46.2 | 20.1 | 18.3 | 29.2 | 17.0 | 18.1 | 19.2 | 18.6 |
| PCT | 25.5 | 56.6 | 76.7 | 11.8 | 14.3 | 14.5 | 12.1 | 13.9 | 39.1 | 17.4 | 57.9 | 18.1 | 11.5 | 12.4 | 13.0 | 12.6 |
| SimpleView | 27.2 | 55.5 | 82.2 | 13.7 | 17.2 | 20.1 | 14.5 | 14.2 | 24.6 | 17.7 | 46.8 | 30.7 | 18.5 | 17.0 | 17.9 | 17.2 |
| Ours | 24.1 | 54.9 | 54.7 | 11.7 | 12.9 | 14.2 | 12.1 | 12.6 | 26.3 | 13.4 | 80.6 | 18.3 | 12.1 | 13.0 | 12.7 | 12.2 |
Reference:
[1] Benchmarking and Improving Robustness of 3D Point Cloud Recognition against Common Corruptions. arXiv 2022.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I have raised my rating to 6 and thanks for the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thanks for your satisfaction with our reply! We greatly appreciate your positive evaluation. We will incorporate the additional experiments and improve the paper in the final version. If you have any further concerns or questions, please do not hesitate to reach out. We are committed to addressing any remaining issues promptly and thoroughly. Thank you again for your valuable feedback and best wishes! | null | null | Rebuttal 1:
Rebuttal: We sincerely appreciate all reviewers and community members for their efforts in evaluating the paper and writing suggestions that greatly help us improve the work! Please find our responses to your individual questions below. We look forward to discussing any issues further should you have any follow-up concerns!
Pdf: /pdf/06c7ea9181d9472a09f498393c995b819074c41a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales? | Accept (poster) | Summary: This manuscript explores the challenge of noisy rationales in LLMs. The authors introduce the NoRa dataset, specifically designed to evaluate LLMs' robustness to noisy rationales. They reveal a widespread vulnerability among LLMs to such noise, despite advancements in in-context learning. To address this challenge, they propose the CD-CoT method, which enhances denoising-reasoning capabilities by contrasting noisy rationales with clean rationales. The authors conduct comprehensive evaluations using the NoRa dataset and demonstrate the vulnerability of LLMs to noisy rationales. They also show that CD-CoT significantly improves the performance of LLMs by rectifying noisy rationales. The manuscript contributes by formalizing the problem of noisy rationales, constructing the NoRa dataset, evaluating LLMs' robustness, and proposing the CD-CoT method as a solution.
Strengths: 1. The manuscript addresses an under-explored challenge in LLMs - the issue of noisy rationales in chain-of-thought prompting. By focusing on the noisy rationales problem, the authors bring attention to a practical challenge that arises in various domains, such as crowdsourced platforms, dialogue systems, and machine-generated data.
2. The authors construct the NoRa dataset, which serves as a comprehensive testbed for evaluating LLMs' robustness in reasoning with noisy rationales. The dataset covers various reasoning tasks, including mathematical, symbolic, and commonsense domains. The formalization of noisy rationales by adding irrelevant or inaccurate thoughts, along with controlling the reasoning difficulty through different noise ratios, enhances the dataset's reliability and usefulness.
3. The manuscript provides a thorough evaluation of various LLMs using the NoRa dataset. The authors disclose the intrinsic vulnerability of LLMs to noisy rationales and demonstrate significant accuracy decreases compared to the clean scenario. This evaluation highlights the importance of addressing the noisy rationales problem and motivates the development of robust methods.
Weaknesses: See the questions listed below.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. While the manuscript introduces the CD-CoT method as a solution to address the noisy rationales problem, it does not extensively compare CD-CoT with other existing methods or approaches. Including a comparative analysis with alternative denoising or reasoning enhancement techniques would provide a better understanding of CD-CoT's effectiveness and its advantages over other methods.
2. While the manuscript focuses on addressing the noisy rationales problem and proposes the CD-CoT method as a solution, it does not extensively analyze or provide insights into the underlying causes of the vulnerability of LLMs to noisy rationales. A deeper exploration of the reasons behind this vulnerability could contribute to a better understanding of the problem and potentially inspire further research directions.
3. The authors should provide additional information regarding the underlying mechanisms of CD-CoT. For instance, within the first step of CD-CoT, the introduction of "Rationale Selection" is mentioned to denoise the rephrased results. However, the authors have not clarified which specific technique they employ to achieve answer matching.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Similar to questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. Please find the point-to-point responses below. Any further comments and discussions are welcome!
> Q1. About the baseline methods.
**Reply**: Thanks for this question. **We would like to kindly point out that we have included extensive baseline methods.**
We clarify the baseline methods as follows:
- **In Section 4, we employ five representative methods as baselines**, i.e., ISC, SP, SM, SD, and SC, encompassing the two traits of self-correction and self-consistency. ISC and SP exemplify self-correction, focusing on response rectification and prompt rephrasing, respectively. SM, SD, and SC fall under self-consistency: SM injects perturbations into prompts for robustness, SD masks prompts and asks LLMs to reconstruct them, while SC directly samples outputs without preprocessing prompts.
- **In Section 5, we employ three methods that require additional information:** (1) SCO utilizes the ground truth answers of test questions to determine when to terminate the self-correction loop; (2) BT guides self-correction by providing the model with the position of the initial noise; and (3) CC conducts direct reasoning with all the clean or noisy examples without any kinds of denoising.
- **The empirical results in Section 5.2 show that the proposed CD-CoT method outperforms all these baseline methods.**
- **Besides, in Appendix B, we conduct a detailed literature review,** covering the related work in terms of in-context learning (B.1), self-correction methods (B.2), self-consistency methods (B.3), and external supervision (B.4). We further discuss the relation between our work and literature in B.5.
Therefore, we have conducted a comprehensive literature review and have compared many baseline methods in experiments. **Please refer to these contents and tell us if any baseline method or approach should be included in discussions or experiments.** We will definitely do this in the revision.
> Q2. A deep exploration of the LLMs’ vulnerability to noisy rationales.
**Reply**: Thanks for this insightful comment.
**In this work, we conduct the first systematic study on the LLMs’ vulnerability to noisy rationales.** Section 4, Appendix F.5, and Appendix F.10 of our submission summarize several observations and insights.
In Section 4, we reveal
- the general vulnerability to noisy rationales
- ineffectiveness of self-correction methods
- limited efficacy of self-consistency methods
- temperature sensitivity to noisy rationales
- complex impact of increasing prompting examples
- the universal vulnerability across different LLMs
In Appendix F.5, we show
- task-dependent vulnerability patterns
- varying impact across noise types
- heightened sensitivity to inaccurate thoughts
- task-specific robustness
In Appendix F.10, we reveal
- the resilience to shuffled input-rationale-answer mappings
- the sensitivity to the rationale and label distribution
**Considering the only black-box access of several LLMs, e.g., GPT-3.5 and Gemini, we believe that broader and deeper investigations can be conducted with open-source LLMs in future work.** In this context, the above observations and benchmarks in our work provide the foundation. Specifically,
- In future work, we plan to extend our research to white-box models to gain deeper insights into the impact of noisy rationales. We intend to investigate the effects of rationale noise on model attention patterns and perplexity, observe changes in input attention during the model's reasoning process, and analyze how these attention shifts correlate with the model's performance under noisy conditions.
- These investigations will help us better understand the mechanisms by which noise in rationales affects model reasoning. By examining the internal dynamics of white-box models, we aim to uncover the underlying reasons for LLMs' vulnerability to noisy rationales and potentially develop more robust reasoning methods.
- Besides, CoT and its variants have predominantly focused on deductive reasoning, leaving inductive reasoning largely unexplored. Investigating the ability of LLMs to extract rules from noisy examples is a compelling area. Additionally, theoretical analysis of noisy ICL can offer deeper insights into the noisy rationales problem.
**Therefore, we sincerely appreciate your insightful comment and will definitely continue to explore the underlying reasons for LLMs’ vulnerability to noisy rationales.**
> Q3. The technical details of the proposed CD-CoT method.
**Reply**: Thanks for this technical question.
We would like to clarify the rationale selection along with the answer matching operation in CD-CoT.
**The rationale selection (step 2) selects the rationales to deduce the true answer.**
- As the rephrased rationales can still contain noisy information, and each rationale can deduce an answer to the question, we select the rationales that the corresponding answers match the given (true) answer of this demonstration. This is called the “answer matching” and does not require an LLM for inference.
- For example, (Q, R, A) indicates the question, rationale, and answer of a given noisy demonstration. The rationale rephrasing (step 1) obtains three rephrased demonstrations: (Q1, R1, A1), (Q2, R2, A2), and (Q3, R3, A3). Then, if A1=A2=A and A3!=A, we will select the first two rephrased rationales, R1 and R2.
- Namely, only rephrased results with consistent answers are retained, forming the refined candidate pool for that noisy demonstration of in-context learning.
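The answer-matching selection described above can be sketched as follows (a minimal illustration; the rationale strings and answers are hypothetical placeholders, not the actual NoRa data):

```python
def select_rationales(rephrased, true_answer):
    """Keep only rephrased rationales whose deduced answer matches the
    given (true) answer of the demonstration -- the 'answer matching'
    step, which requires no LLM inference."""
    return [r for (r, a) in rephrased if a == true_answer]

# Example from the reply: rephrasings (R1, A1), (R2, A2), (R3, A3)
# with A1 == A2 == A and A3 != A, so R1 and R2 are retained.
pool = select_rationales(
    [("R1", "niece"), ("R2", "niece"), ("R3", "aunt")],
    true_answer="niece",
)
# pool == ["R1", "R2"]
```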
We will clarify the above in the revision. Please also refer to Appendix E.2, where more technical details and the full algorithm of CD-CoT are introduced.
**We would thank reviewer ajBs again for the valuable comments!**
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' response, which addresses my concerns. Thank you for your rebuttal. I'm satisfied with your clarification of the chosen baselines, your exploration of the LLM's vulnerability to noisy rationales, and the details of CD-CoT. I will be raising my scores to "Accept."
However, I still recommend that the authors remove some rephrasing examples and case studies at appendices in the final version, as the paper is still difficult to read due to its length.
---
Reply to Comment 1.1.1:
Title: Many thanks for your positive support and constructive comments!
Comment: Hi Reviewer ajBs,
Thank you so much for your comments and appreciation!
We will follow your suggestion to reduce the examples and case studies to shorten the number of pages.
We have also provided a revision plan for improving the presentation (in response to W1 of reviewer bpvC).
Please feel free to interact with us if you have any further comments.
Best regards,
Authors of #1934
---
Rebuttal 2:
Title: Would you mind checking our response and confirming whether you have any further questions?
Comment: Dear Reviewer ajBs,
Thanks for your time and comments on our work!
We have tried our best to address your concerns and provided detailed responses to all your comments and questions.
Would you mind checking our response and confirming whether you have any further questions?
Best regards,
Authors of #1934
---
Rebuttal Comment 2.1:
Title: Please discuss with authors
Comment: Dear Reviewer ajBs,
Please respond to author rebuttal and discuss with authors.
Thanks,
Your AC | Summary: While previous work focuses on LLMs' stability over noisy questions, this paper investigates the robustness of LLMs to noisy rationales in CoT prompting. The authors introduce the NoRa dataset for this task, which inserts irrelevant or inaccurate sentences into the reasoning steps. They show that LLMs are significantly affected by such noise and propose a novel method, CD-CoT, to address the issue. The method contrasts noisy rationales with a clean one to improve robustness. The results show that the proposed method improves the performance over noisy rationales. The key idea is similar to traditional adversarial attacks for QA, which evaluate a model's robustness by inserting distracting sentences.
Strengths: 1. The focus on noisy rationales in CoT prompting is an under-explored area.
2. The thorough evaluation of various LLM backbones and baselines.
Weaknesses: 1. I am not convinced by the necessity of exploring tasks with noisy rationales in ICL. The main problem is that the generated rationale can be noisy (T_test). However, the clean rationales in the demonstrations (T_1 to T_n) are more than adequate for the types of reasoning tasks evaluated in the paper. For example, the experiments in Table 3, where the demonstration rationales are noisy, do not represent a common scenario faced by the baseline models.
2. Claiming to be the first to explore noisy rationales seems overstated. For example, contrastive CoT used in the baseline also deals with noisy rationales.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why is exploring LLM performance with noisy rationales in demonstrations important when clean demonstrations for these task types are easily available?
2. In Appendix F.4, three different types of irrelevance are defined. But only one way to calculate relevance is mentioned, i.e., calling an API for cosine similarity. How do you differentiate the calculation for level 1, level 2, and level 3 irrelevance?
3. Why does w/SD perform better when inaccurate/irrelevant sentences are inserted?
4. Are there specific properties of noisy rationales that CD-CoT handles better or worse? Why?
5. How would the method generalize to other types of reasoning tasks beyond those covered in the NoRa dataset?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have acknowledged the limitations regarding the need for clean rationales in their proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. Please find the point-to-point responses below. Any further comments and discussions are welcome!
> W1. About the noisy rationales in in-context learning.
**Reply**: **The rationales in the demonstrations (T_1 to T_n) can be noisy in practice, which is the main problem.** This problem is caused by diverse sources such as crowdsourced platforms, dialogue systems, and machine-generated data. We have extensively discussed the cause of noisy rationales in Appendix C.1, with several real-world examples in Appendix C.2. Besides, we provide examples in Table 2 in the extra PDF file, showing that noisy inputs (T_1 to T_n) can lead to noisy outputs (T_test).
**Empirically, we reveal the widespread vulnerability among prevailing LLMs to noisy rationales, with limited efficacy from existing reasoning methods.** Compared with clean rationales, most cases in Table 3 show a 15-30% decrease with irrelevant noise and a more drastic 20-80% decrease with inaccurate noise.
**Therefore, we would argue that noisy rationales constitute a practical and challenging problem.** It is largely overlooked by existing work (which assumes ICL demonstrations are clean) and deserves more attention. We believe the NoRa dataset and the observations in this work can help the community build trustworthy foundation models.
> W2. Differences with a related work (CC).
**Reply**: Our submission defines the noisy rationale problem as **“factually inaccurate or irrelevant reasoning steps paired with valid question-answer prompts.”** Figure 1 shows an example. Here, only one rationale is given in each demonstration, which can be potentially noisy but unknown to the model.
However, in CC’s setting, each ICL demonstration explicitly includes a clean rationale and a wrong rationale. An example is shown in Figure 1 in CC’s paper. **Notably, this rationale is wrong instead of noisy as it induces the wrong answer. Therefore, CC’s setting and ours are totally different.**
In addition, empirical results in Table 7 show that our method CD-CoT significantly outperforms CC when given the same information.
> Q1. About the practice of noisy rationales.
**Reply**: We agree that it is unnecessary to consider noisy rationales when extensive clean demonstrations are available. However, clean demonstrations are not always available in practice, especially when requiring experts’ domain knowledge, e.g., medical diagnosis. **In this context, either human-annotated or machine-generated demonstrations can be noisy as we respond to W1.**
Moreover, our work considers the practical scenario that most ICL demonstrations are clean and only a few contain noise. For example, in base-9 of Table 3, introducing easy-level inaccurate noise led to a 50% decrease in accuracy. More empirical results are in Appendix F.5. **These underscore the practical importance and challenges of addressing noisy rationales, even when they appear infrequently.**
> Q2. About the levels of irrelevance.
**Reply**: To clarify, we first defined three levels of irrelevance, i.e., Level-1 (topic-irrelevant), Level-2 (topic-relevant but task-irrelevant), and Level-3 (topic-relevant, task-relevant, but not helpful). Using these definitions to build prompts, we then employed GPT-4 to generate corresponding irrelevant content for each type.
Next, we calculated cosine similarity scores to illustrate the varying degrees of relevance across these predefined levels. **The scores below align with our qualitative categorization**, offering a more concrete understanding of the semantic distances between the different levels of irrelevance.
| Cosine Similarity | Level-1 | Level-2 | Level-3 |
| :---- | :---- | :---- | :---- |
| Math Base-9 | 0.75 | 0.87 | 0.88 |
| Symbolic Equal | 0.73 | 0.79 | 0.82 |
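As a minimal sketch of such a similarity check (the toy vectors below merely stand in for API-returned sentence embeddings; the actual embedding model is not specified here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: the question, a Level-1 (topic-irrelevant)
# statement, and a Level-3 (topic- and task-relevant) statement.
question = [1.0, 0.0, 0.5]
level1 = [0.1, 1.0, 0.0]   # far from the question in embedding space
level3 = [0.9, 0.1, 0.45]  # close to the question in embedding space
assert cosine(question, level3) > cosine(question, level1)
```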
> Q3. About the performance of SD.
**Reply**: **SD's denoising effect relies on LLMs' intrinsic capability.** SD performs five maskings and reconstructions per noisy example, concatenating results into five prompts for LLM inference. In contrast, SC performs five direct inferences on noisy prompts without explicit denoising. By comparing SD and SC's performance, we can gain insights into the model’s denoising effects.
**In Table 3, SC outperforms SD in Math and Sym tasks, while SD only marginally excels in the Common task.** This pattern persists in the clean settings. This suggests that LLMs struggle to reconstruct masked prompts in complex, domain-specific tasks but perform better in simpler common tasks, highlighting the varying levels of internal knowledge within LLMs across different domains.
**Besides, the counterintuitive results on the Common task may be attributed to task-specific characteristics.** Observation of reconstructed masked prompts in the Common task indicates that LLMs tend to bypass reconstruction instructions in noisy settings, directly providing final answers, as shown in Tables 80-82. This accidental removal of all rationales, including noisy ones, effectively acts as a noise filter. Combined with the LLM's natural strength in the Common task, this unplanned filtering likely explains the small improvement in accuracy under noisy conditions.
> Q4. Characteristics of CD-CoT.
**Reply**: In Table 7, CD-CoT performs better in handling irrelevant noise compared to inaccurate noise at the same level. **This is because irrelevant noise is easier to distinguish from the target information and, therefore, more readily removed during the rephrase step when performing contrasting denoising.** Further, Table 3 of the extra PDF file provides the denoised results under high-noise settings.
> Q5. The generalization ability of CD-CoT.
**Reply**: Please refer to the general response, where we empirically justify the proposed CD-CoT method's generalization ability to other datasets.
We will include the above discussions in the revision. **We would thank reviewer qSQZ again for the valuable comments.**
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses during the rebuttal period. However, many of my concerns remain unresolved.
- For w1 & q1: few-shot prompting relies on using a few clean and accurate examples, minimizing the need for extensive training data. The noise mentioned in the appendix can be mitigated by carefully selecting well-annotated demonstrations, as is common in existing CoT methods. The authors' assertion that obtaining even a few clean demonstrations is difficult seems overstated.
- Contrastive CoT addresses performance issues related to incorrect reasoning, while inaccurate rationales are also an important part of the noise explored in this work.
- Q2: While the response clarifies the levels of irrelevance, the paper lacks a clear description of how different promptings realize these levels.
- Q3 remains unaddressed. The question was why w/SD outperforms in commonsense/symbolic equal tasks with irrelevant and inaccurate rationales compared to clean ones.
---
Reply to Comment 1.1.1:
Title: A further response to Reviewer qSQZ (1/3)
Comment: We would like to thank reviewer qSQZ for the further comments. Here is our further response.
> For w1 & q1: few-shot prompting relies on using a few clean and accurate examples, minimizing the need for extensive training data. The noise mentioned in the appendix can be mitigated by carefully selecting well-annotated demonstrations, as is common in existing CoT methods. The authors' assertion that a few clean demonstrations is difficult seems overstated.
**Reply:** Thanks for the comment. We would further explain the noisy demonstrations in practice.
**In fact, the in-context learning of LLMs suffers from susceptibility to the selected demonstrations and from the intricacy of generating those demonstrations.** Several recent investigations on noisy questions [1] have shown that (i) LLMs can be distracted by irrelevant or adversarial context and (ii) LLM reasoning is unstable under small modifications to prompts. Besides, another line of work on noisy answers shows that LLMs can be misled into agreeing with factual errors. Our original submission has already discussed these in Section 2 and Appendix B.
**The key point is that humans can inevitably make mistakes in practice, which can mislead the models.** Even machine learning practitioners can make mistakes in data annotation, which motivates extensive research on label-noise learning [2,3,4]. **Similarly, there is no guarantee for clean demonstrations in practice, and LLMs can encounter noisy demonstrations provided by diverse users with different experiences and background knowledge.**
In this context, behind the outstanding feasibility of CoT methods, the LLMs’ robustness against noisy inputs, such as noisy questions [1] and noisy rationales studied in this work, should be given more attention. All four reviewers acknowledge this under-explored research problem.
This work exceeds the ideal assumption of obtaining clean demonstrations and reveals the existing CoT methods’ unsatisfactory robustness against noisy rationales. It presents the LLMs’ fundamental weakness in dealing with noisy rationales that might be unseen from the training data, similar to the jailbreak attack [5] and the reversal curse [6].
**Besides, constructing and selecting well-annotated demonstrations is non-trivial and costly.** On one hand, LLMs have been proven sensitive to the ICL examples [7]. On the other hand, human annotations of ICL examples can be expensive, as we have discussed in the Appendix and rebuttal responses. Therefore, incorporating more human supervision in dealing with noisy rationales is feasible but can be expensive.
In addition, the problem of noisy labels could likewise be avoided by well-annotated human labels; nevertheless, numerous noisy benchmarks and robust methods have been proposed to improve models' robustness. Similarly, a robust learning and reasoning strategy is desirable for dealing with noisy data.
References
[1] F. Shi et al. Large language models can be easily distracted by irrelevant context. In ICML, 2023.
[2] N. Natarajan et al. Learning with Noisy Labels. In NIPS, 2013.
[3] L. Jiang et al. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. In ICML, 2018.
[4] Z. Zhang et al. Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. In NIPS, 2018.
[5] A. Wei et al. Jailbroken: How Does LLM Safety Training Fail? In NeurIPS, 2023.
[6] L. Berglund et al. The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A". In ICLR, 2024.
[7] Y. Lu et al. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity. In ACL, 2022.
> Contrastive CoT addresses performance issues related to incorrect reasoning, while inaccurate rationales is also an important part of noise explored in this work.
**Reply:** Thanks for the comment. We agree that the settings of these two works are relevant but also greatly different. As we responded to W2, we investigated noisy rationales (deducing the right answer) instead of wrong rationales (deducing the wrong answer). We will incorporate more discussions with this work in revisions.
---
Reply to Comment 1.1.2:
Title: A further response to Reviewer qSQZ (2/3)
Comment: > Q2: While the response clarifies the levels of irrelevance, the paper lacks a clear description of how different promptings realize these levels.
**Reply:** Thanks for the comment. We would further clarify our methodology for generating different levels of irrelevant noise.
**Definition and Prompt Engineering:** We first define the three levels of irrelevance:
- Level-1: Topic-irrelevant
- Level-2: Topic-relevant but task-irrelevant
- Level-3: Topic-relevant, task-relevant, but not helpful
With this definition, we then craft prompts for GPT-4 to generate corresponding irrelevant content. The basic structure of our prompt is as follows:
> We define irrelevant noise in reasoning as information that does not contribute to solving the given problem or reaching the correct conclusion. To mimic real-world scenarios, we categorize this noise into 3 levels:
> 1. Level-1 (Topic-irrelevant): Statements completely unrelated to the topic or domain of the question.
> 2. Level-2 (Topic-relevant but task-irrelevant): Statements related to the general topic but not directly applicable to solving the specific task.
> 3. Level-3 (Topic-relevant, task-relevant, but not helpful): Statements that seem relevant to both the topic and task but do not actually aid in reaching the correct solution.
> Given the question {Q} and answer {A}, please generate a Level-{X} irrelevant statement after each reasoning step. Provide {K} examples of such statements.
> Important notes:
> - The inserted noise should not disrupt the original reasoning logic.
> - The irrelevant statements should be plausible in the context of the question but not contribute to solving it.
> - Ensure that the level of irrelevance matches the specified Level-{X}.
> Please proceed with generating the irrelevant statements as requested.
Based on the above definition and prompt, we generate the data with the following four steps.
- **Step-1: Initial Generation and Human Evaluation.** We used this prompt to generate an initial set of irrelevant statements for each level. These were then manually reviewed and filtered to ensure they accurately represented the intended level of irrelevance. We selected high-quality examples for each level.
- **Step-2: Scaled Generation.** Using these high-quality examples as in-context learning demonstrations, we prompted GPT-4 to generate a larger set of irrelevant statements for each level.
- **Step-3: Validation through Similarity Analysis.** To confirm that our generated statements indeed represented different levels of irrelevance, we conducted a cosine similarity analysis. This analysis quantitatively demonstrated the semantic differences between levels, as shown in our previous response.
- **Step-4: Dataset Construction.** Finally, we integrated these generated irrelevant statements into our dataset. We inserted them into relevant demonstrations at appropriate positions, following a probability distribution that corresponds to the intended difficulty level of the task.
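Step-4 can be sketched roughly as follows (the per-step insertion probability and the sampling scheme are assumptions for illustration, not the paper's exact specification):

```python
import random

def inject_noise(steps, noise_pool, p=0.5, seed=0):
    """After each reasoning step, insert an irrelevant statement with
    probability p -- a hypothetical sketch of the Step-4 construction.
    The original reasoning order is preserved."""
    rng = random.Random(seed)  # seeded for reproducible datasets
    out = []
    for step in steps:
        out.append(step)
        if noise_pool and rng.random() < p:
            out.append(rng.choice(noise_pool))
    return out
```

Raising `p` yields harder noise levels, since more irrelevant statements are interleaved with the genuine reasoning steps.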
We will add the above technical details to our submission.
---
Reply to Comment 1.1.3:
Title: A further response to Reviewer qSQZ (3/3)
Comment: > Q3 remains unaddressed. The question was why w/SD outperforms in commonsense/symbolic equal tasks with irrelevant and inaccurate rationales compared to clean ones.
**Reply:** Thanks for the comment.
Following your comments, we thoroughly examine the empirical results from the log files. We identify some inconsistencies in the SD method's performance, particularly in the symbolic equal and commonsense tasks.
SD in the symbolic equal task:
- In this task, comparing w/SD and w/SC reveals that the LLM's domain knowledge is relatively weak. According to the experimental results, when SD rephrases symbolic tasks, it often **directly returns the final answer** of the original response.
- **Notably, the quality of the rephrased results basically does not change with the increase in noise.** The examples of rephrased rationales can be found in Tables 65-76 of Appendix F.9.
- As a result, the accuracy of w/SD under both clean and noisy settings is very close, with a maximum difference of **less than 1%**, which we believe is within the **acceptable** range of error.
SD in the commonsense task:
- We observed different behaviors in clean and noisy settings.
- In noisy settings, SD does not follow the prompt to make minor corrections. **Instead, SD generates short rationales with only the answer, discarding the original rationale** (as shown in the examples below). This simplification can benefit commonsense tasks where the model already has a strong inherent understanding.
- On the other hand, in the clean setting, SD unexpectedly **generates entirely new rationales** instead of making minor corrections. These new rationales may deviate from the original correct reasoning, potentially **introducing errors or inconsistencies** that weren't present in the initial, accurate solution.
Next, we show two representative examples of SD in the commonsense task. We show that
- given a clean rationale, SD generates entirely new rationales instead of making minor corrections;
- given a noisy rationale, SD generates short rationales with only the answer, discarding the original rationale.
**Example 1:**
- The original clean rationale:
> Carrie is Hugh's sister's husband's daughter, so the relations path is sister, husband, daughter. For husband's daughter, we have husband's daughter is daughter. So the relations path are reduced to sister, daughter. For sister's daughter, we have sister's daughter is niece. So the relations path are reduced to niece. Therefore, **Answer: niece.**
- The rephrased rationale by SD (given the clean rationale):
> To determine Carrie's relationship to Hugh, we can break down the given information step by step:
> 1. Carrie is Hugh's sister's husband's daughter.
> 2. Hugh's sister is Carrie's sibling.
> 3. Hugh's sister's husband is Carrie's brother-in-law.
> 4. Carrie is the daughter of Hugh's sister's husband. Therefore, the **answer is: Carrie is Hugh's niece.**
- The original noisy rationale:
> Carrie is Hugh's sister's husband's daughter, so the relations path is sister, husband, daughter. For husband's daughter, we have husband's daughter is daughter. We have daughter's grandmother is niece. So the relations path are reduced to sister, daughter. For sister's daughter, we have sister's daughter is niece. We have daughter's niece is niece. So the relations path are reduced to niece. Therefore, **Answer: niece.**
- The rephrased rationale by SD (given the noisy rationale):
> **niece.**
**Example 2:**
- The original clean rationale:
> Francisco is Lynn's son's father's brother, so the relations path is son, father, brother. For father's brother, we have father's brother is uncle. So the relations path are reduced to son, uncle. For son's uncle, we have son's uncle is brother. So the relations path are reduced to brother. Therefore, **Answer: brother.**
- The rephrased rationale by SD (given the clean rationale):
> To determine Francisco's relationship with Lynn, we start with the information provided: Francisco is Lynn's son's father's brother.
> Step 1: Lynn's son's father is Lynn's husband.
> Step 2: Francisco is Lynn's husband's brother.
> Based on this information, we can conclude that Francisco is Lynn's brother-in-law. **Answer: brother-in-law**
- The original noisy rationale:
> Francisco is Lynn's son's father's brother, so the relations path is son, father, brother. For father's brother, we have father's brother is uncle. So the relations path are reduced to son, uncle. For son's uncle, we have son's uncle is brother. We have uncle's niece is nephew. So the relations path are reduced to brother. We have brother's nephew is mother. Therefore, **Answer: brother.**
- The rephrased rationale by SD (given the noisy rationale):
> **Answer: brother.**
We will include the above discussions in the revision. **We would like to thank reviewer qSQZ again for the comments in the discussion phase.** Any further comments or questions are welcome!
---
Rebuttal 2:
Title: Would you mind checking our response and confirming whether you have any further questions?
Comment: Dear Reviewer qSQZ,
Thanks for your time and comments on our work!
We have tried our best to address your concerns and provided detailed responses to all your comments and questions.
Would you mind checking our response and confirming whether you have any further questions?
Best regards,
Authors of #1934
---
Rebuttal Comment 2.1:
Title: Please discuss with authors
Comment: Dear Reviewer qSQZ,
Please respond to author rebuttal and discuss with authors.
Thanks,
Your AC
---
Rebuttal 3:
Title: [Last-day Reminder] We are anticipating your post-rebuttal feedback!
Comment: Dear Reviewer qSQZ,
**Thanks very much for your time and valuable comments.**
We understand you might be quite busy. However, the discussion deadline is approaching, and we have only around **one day** left.
**We believe that our responses—detailed clarifications with empirical results—are sufficient to address the questions you raised.** Specifically, we
- discuss noisy rationales in real scenarios (W1, Q1)
- discuss the relationship and differences with related works (W2)
- clarify the evaluation metric (Q2)
- further explain the empirical results and findings (Q3, Q4)
- conduct additional experiments with CD-CoT (Q5)
**Would you mind checking our response and confirming whether you have any further questions?**
Thanks for your attention.
Best regards,
Authors of #1934 | Summary: The paper proposes a new noisy-rationales dataset to evaluate the robustness of LLM reasoning across various domains, covering math, symbolic, and commonsense tasks. The dataset is formed by adding irrelevant or inaccurate thoughts into rationales. Existing LLMs such as GPT-3.5 struggle on this newly proposed dataset. The authors propose to rectify the rationales with Contrastive Denoising with noisy CoT (CD-CoT), which achieves substantial accuracy improvements.
Strengths: 1. As far as I know, this paper is among the first to explore the noisy rationale problem and it provides many useful insights.
2. Authors evaluated the noisy rationale problem on the latest GPT 3.5, Gemini-Pro, etc. to demonstrate the issue, and meanwhile proposed a solution CD-CoT to address this problem.
3. The insights behind the dataset creation are delineated thoroughly, and align with the evaluation and observations.
Weaknesses: 1. Even though the dataset covers 3 domains including math, symbolic, and commonsense, the specific tasks are confined to certain subtasks like base-9 and equal-length. The generalization capability of the proposed method may therefore raise some concerns.
2. Given that some evaluation metrics are new, it would be better to include more description and explanation in the main text.
3. In the proposed CD-CoT method, does selection or voting require a separate LLM?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Could you elaborate on "answer matching" from step 2?
2. When you generate the rationales, would `N` incur large computation needs?
3. Is CD-CoT sensitive to the prompt designs?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. Please find the point-to-point responses below. Any further comments and discussions are welcome!
> W1. The generalization ability of the proposed CD-CoT method.
**Reply**: Thanks for this valuable comment.
**Please refer to the general response**, where we further discuss and empirically justify the proposed CD-CoT method's generalization ability to other datasets.
> W2. About the evaluation metric.
**Reply**: Thanks for this helpful comment.
**We would like to clarify the usage of evaluation metrics.**
- The evaluation metric used in the main content is the accuracy introduced in Section 4.
- The other metric, Normalized Difference in Accuracy (NDA), in the appendix, is only an auxiliary tool for analyzing empirical results. This metric quantifies the efficacy of a given LLM and denoising method under the noisy scenario (details in Appendix F.2).
Please note that placing the introduction and analysis of the NDA metric in the main text would make it too crowded. Besides, NDA does not affect the empirical observations and analysis in the main text.
Therefore, we introduce the NDA metric in the appendix and have built a jump link in Section 4. We will further clarify the usage of metrics in the revision.
> W3 & Q1. Technical details of the proposed CD-CoT method.
**Reply**: Thanks for this technical question.
**The rationale selection (step 2) and answer voting (step 4) do not require using an LLM.**
Specifically, **the rationale selection (step 2) selects the rationales to deduce the true answer.**
- As the rephrased rationales can still contain noisy information, and each rationale deduces an answer to the question, we select the rationales whose corresponding answers match the given (true) answer of the demonstration. This is called “answer matching” and does not require an LLM for inference.
- For example, let (Q, R, A) denote the question, rationale, and answer of a given noisy demonstration. The rationale rephrasing (step 1) obtains three rephrased demonstrations: (Q1, R1, A1), (Q2, R2, A2), and (Q3, R3, A3). Then, if A1=A2=A and A3!=A, we select the first two rephrased rationales, R1 and R2.
- Namely, only rephrased results with consistent answers are retained, forming the refined candidate pool for that noisy in-context demonstration.
**The answer voting (step 4) does not require an LLM as well.**
- Given the D answers from step 3, we take an equally weighted majority vote to obtain the final answer.
- For example, if the answer set is {1,1,1,2,3} where D=5, the answer voting will select “1” as the final answer for its highest frequency.
We will clarify the above in the revision. Please also refer to Appendix E.2, where more technical details and the full algorithm of CD-CoT are introduced.
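The answer-matching and voting steps described above can be sketched in a few lines of Python (a minimal illustration of this reply, not the paper's actual implementation; the function names are hypothetical):

```python
# Hypothetical sketch of CD-CoT's rationale selection ("answer matching",
# step 2) and answer voting (step 4), as described in the reply above.
from collections import Counter

def select_rationales(rephrased, true_answer):
    """Keep only rephrased rationales whose deduced answer matches
    the given (true) answer of the noisy demonstration."""
    return [rationale for rationale, answer in rephrased if answer == true_answer]

def vote_answer(answers):
    """Equally weighted majority vote over the D answers from step 3."""
    return Counter(answers).most_common(1)[0][0]

# Example from the reply: three rephrasings of one demonstration.
rephrased = [("R1", "niece"), ("R2", "niece"), ("R3", "daughter")]
print(select_rationales(rephrased, "niece"))  # -> ['R1', 'R2']
print(vote_answer([1, 1, 1, 2, 3]))          # -> 1
```

Neither step calls an LLM: selection is a simple equality filter against the known answer, and voting is a frequency count.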
> Q2. When you generate the rationales, would N incur large computation needs?
**Reply**: Thanks for this insightful comment.
In the main content, we present the impact of parameters M, D, and C on token usage in Table 10.
**Here, we conduct additional experiments to figure out the effect of varying N on the computational cost.** Specifically, we maintain a constant number of reasoning repetitions D=5 while adjusting N and other parameters. Here are the configurations for testing:
1. N=1, M=1, C=[5], D=5
2. N=2, M=2, C=[3,2], D=5
3. N=3, M=2, C=[3,2], D=5
4. N=4, M=2, C=[3,2], D=5
5. N=5, M=2, C=[3,2], D=5 (the default configuration)
These experiments are conducted on the NoRa-Math base-9 task with irrelevant hard noise. The table below shows the total number of tokens consumed by CD-CoT for complete reasoning on 300 test samples. This includes tokens used for both rephrasing and reasoning steps.
| N | 1 | 2 | 3 | 4 | 5 |
| :---- | :---- | :---- | :---- | :---- | :---- |
| tokens | 1071560 | 1408845 | 1532606 | 1656617 | 1780095 |
As we can observe, the number of tokens generated increases as N increases. This growth in token count directly correlates with increased computational needs. **Notably, the computational cost does not scale linearly with N.**
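For concreteness, the marginal cost per additional rephrasing can be read directly off the table above (a quick sanity check in Python):

```python
# Token usage per N, taken from the table reported above.
tokens = {1: 1071560, 2: 1408845, 3: 1532606, 4: 1656617, 5: 1780095}

# Increment in tokens for each step from N to N+1.
increments = [tokens[n + 1] - tokens[n] for n in range(1, 5)]
print(increments)  # -> [337285, 123761, 124011, 123478]
```

After N=2, each additional rephrasing adds a roughly constant ~124k tokens, far below the N=1 baseline cost, which is why the total does not scale linearly with N.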
> Q3. Is CD-CoT sensitive to the prompt designs?
**Reply**: Thanks for this insightful comment.
Section 5.1 provides the prompt for contrastive rationale rephrasing. **Here, we generated several variants of prompts to investigate their sensitivity to the proposed CD-CoT method.**
A simpler, shorter prompt:
> Here are two examples: the first one has proper explanation and answer, while the second one has distracted explanation and correct answer. Please follow the first example's explanation and provide the correct explanation and answer for the second one.
A more complex, longer prompt:
> The following presents two examples of the same type of task. The first example contains both a correct explanation and a correct answer. The second example, however, includes a distracted explanation but still provides the correct answer. Your task is to analyze these examples and then provide a revised version explanation of the second example along with its answer. Ensure that your revised explanation is logically consistent with the first example.
Then, we conduct additional experiments on Math Base-9 to compare these three prompts. The results in the table below show that **the performance of CD-CoT is only marginally influenced by these prompts.**
| method | Irrelevant-medium | Inaccurate-medium|
| :---- | :---- | :---- |
| Base Model | 0.28 | 0.08 |
| CD-CoT w/ original prompt | 0.49 | 0.48 |
| CD-CoT w/ short prompt | 0.46 | 0.46 |
| CD-CoT w/ long prompt | 0.47 | 0.48 |
Note that CD-CoT's prompt remains simple. Combining it with advanced methods for iterating on prompts and rationales, such as APE [1] and STaR [2], could further improve this reasoning method.
References:
[1] Y. Zhou et al. Large language models are human-level prompt engineers. In ICLR, 2023.
[2] E. Zelikman et al. STaR: Bootstrapping Reasoning with Reasoning. In NeurIPS, 2022.
---
Rebuttal 2:
Title: Would you mind checking our response and confirming whether you have any further questions?
Comment: Dear Reviewer HgRq,
Thanks for your time and comments on our work!
We have tried our best to address the concerns and provided detailed responses to all your comments and questions.
Would you mind checking our response and confirming whether you have any further questions?
Best regards,
Authors of #1934
---
Rebuttal Comment 2.1:
Title: Please discuss with authors
Comment: Dear Reviewer HgRq,
Please respond to author rebuttal and discuss with authors.
Thanks,
Your AC
---
Rebuttal 3:
Title: [Last-day Reminder] We are anticipating your post-rebuttal feedback!
Comment: Dear Reviewer HgRq,
**Thanks very much for your time and valuable comments.**
We understand you might be quite busy. However, the discussion deadline is approaching, and we have only around **one day** left.
**We believe that our responses—detailed clarifications with empirical results—are sufficient to address the questions you raised.** Specifically, we
- conduct additional experiments with CD-CoT (W1, Q2, Q3)
- clarify the evaluation metric (W2)
- clarify the technical details of the CD-CoT method (W3)
**Would you mind checking our response and confirming whether you have any further questions?**
Thanks for your attention.
Best regards,
Authors of #1934 | Summary: This paper introduces the NORA dataset and a new technique called Contrastive Denoising (CD) that paired with LLMs improves Chain-of-Thought (CoT) reasoning. The paper presents an extensive experimental evaluation over four different LLMs under all tasks in the NORA dataset and a lengthy comparison with CD.
Strengths: ## Originality
This paper addresses the problem of Noisy Rationales (NR), in contrast to that of Noisy Questions (NQ), which has previously been addressed in the literature. The introduction of a dataset specific to NR is new to me, and the CD strategy can also be helpful in different practical contexts.
## Quality and Clarity
The paper is of high quality, well-written, and easy to follow. All sections provide useful details for understanding the core parts of the paper and many more details are also included in the appendix.
## Significance
The contribution is excellent and constitutes a valid resource for future studies in NR. To the best of my knowledge, this is the first dataset proposed for studying the problem of NR. The proposed method (CD) is sound and reasonably outperforms other competitors, being tailored specifically to the NR task. This is good and will serve as a baseline for future methods.
Weaknesses: The length of the paper (comprising all the material in support of the main paper) and the amount of detail are too extensive for a submission to the NeurIPS main track, making the paper better suited for journal publication. Nonetheless, the message, results, and method are clear from the presentation in the main paper.
Another aspect is that the submission would be more in line with a dataset & benchmark paper, mostly for the NORA dataset. In line with requirements for publishing datasets, authors should have taken into consideration the datasheet for datasets (see https://arxiv.org/abs/1803.09010), which is mandatory for reproducibility and use of the dataset. I will discuss with other reviewers and the AC the extent to which this limits the submission by the authors.
I would consider raising my score upon clarifying this point.
I found no particular weaknesses in the experimental benchmarking and the evaluation of the proposed method.
Technical Quality: 4
Clarity: 4
Questions for Authors: The authors suggest in the conclusions that other methods based on retrieval augmentation could constitute possible improvements to the issue of NR. Can you comment on [1] and whether this could have been already used for the task they propose?
[1] Evaluating the Adversarial Robustness of Retrieval-Based In-Context Learning for Large Language Models, Yu et al. (2024)
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The presented contribution lacks mandatory requirements for the NoRa dataset, based on the guidelines of the dataset & benchmark track. This should be discussed to assess the eligibility for the paper to be published in this track.
The NR data is created synthetically, without extending to real noisy rationales that could influence LLMs' CoT reasoning in real scenarios. This is, though, not a serious limitation, given that NoRa is the first dataset proposed for NR.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. Please find the point-to-point responses below. Any further comments and discussions are welcome!
> W1. The presentation of the submission.
**Reply**: Thanks for this constructive comment!
We would kindly note that in the official guideline of NeurIPS 2024, “the main text and references may be followed by **technical appendices, for which there is no page limit.**”
Here, we would like to clarify the contents of our 106-page appendix. Specifically,
- Appendix A (1 page): a further discussion on broader impact, limitations, and extensions.
- Appendix B (4 pages): a detailed literature review.
- Appendix C (22 pages): a comprehensive overview of the constructed NoRa benchmark, with 7-page real-world examples and 8-page examples of NoRa.
- Appendix D (3 pages): the full theoretical analysis.
- Appendix E (2 pages): the implementation details of the proposed CD-CoT method.
- Appendix F (39 pages): the full experiment results, with 23-page rephrased examples of different denoising methods.
- Appendix G (28 pages): more case studies of CD-CoT.
- Appendix H (7 pages): the NeurIPS paper checklist.
Notably, there are 15 pages of dataset examples and 51 pages of reasoning-method examples. **If necessary, we can reduce these examples and move them to a webpage.** Then, a **33-page** appendix will be obtained (the checklist is not counted), making it more suitable for the conference.
We will follow the reviewers’ and ACs’ suggestions for improving our submission. Any further suggestions or comments are definitely welcome!
> W2 & L1. The datasheet of the NoRa dataset.
**Reply**: Thank you so much for this constructive comment!
**We supplement the NoRa dataset datasheet in Table 1 of the extra PDF file**, and the source files of NoRa can be accessed by the anonymous github link in our submission.
**Besides, we would kindly point out that the main track papers of NeurIPS can also propose new datasets.** For example, [1] proposes the PRONTOQA-OOD dataset for benchmarking the out-of-demonstration reasoning capability of LLMs, [2] proposes the CLADDER dataset for causal reasoning, and [3] proposes the Clevr-4 dataset for category discovery.
**What’s more, our submission goes beyond proposing a new dataset: we also propose a new reasoning method, CD-CoT, to improve the reasoning robustness against noisy rationales.** In addition, based on NoRa, **we reveal several insights for the under-explored noisy rationale problem** that can be valuable for building trustworthy foundation models.
> Q1. Discussion with a related paper [4].
**Reply**: Thanks for recommending this paper. We carefully read it and had the following discussion.
- **Settings:** [4] explores the robustness of retrieval-augmented ICL against demonstration attacks and test sample attacks. It focuses on perturbing the example questions (i.e., noisy questions) or labels, while our work focuses on the rationales of the examples (i.e., noisy rationales).
- **Methodology:** The DARD method proposed in [4] improves the robustness of retrieval-augmented ICL against test sample attacks by introducing perturbed examples into the example pool.
- **Empirical observations:** [4] finds that retrieval-augmented ICL exhibits better robustness against test sample attacks. However, its robustness decreases when facing demonstration attacks, suggesting that LLMs are more sensitive to perturbations in demonstrations that are more similar to the test samples.
We will include the above discussion in the revision.
> L2. Extensions to the noisy rationales in real scenarios.
**Reply**: Thanks for this insightful comment.
We agree with your point. **Meanwhile, we would note that the noisy rationales are carefully designed to simulate scenarios in practical applications.**
**The noise generation is based on extensive research into the types of irrelevant or misleading information that can impact LLM reasoning.** Specifically,
- In Appendix C.1, we provide a comprehensive summary of the causes of irrelevant and inaccurate noise generated by both humans and models.
- In C.2, we present several real-world examples to illustrate how reasoning noise commonly occurs in daily in-context scenarios.
- Our synthetic noises are modeled after these real-world examples, ensuring that they closely mimic the types of interference frequently encountered in practical applications.
- Besides, our method of inserting synthetic noise allows for better control over the ratio, type, and distribution of noise, enabling a systematic evaluation of the noisy rationales.
Empirically, in addition to the standard evaluation of NoRa, **we also evaluate the effects of noisy rationales in different real-world scenarios.**
- In F.4, we introduce semantic difficulty levels of irrelevant content in our noisy rationales, aiming to better reflect the complexity and variability of noise encountered in actual applications.
- In F.5 and F.6, we consider different numbers of noisy thoughts and various numbers of noisy examples, including the ablation study on the order of noisy examples.
- In F.8, we investigate the noisy rationale problem in large-scale real-world scenarios by evaluating the impact of noisy context in multi-turn conversational QA tasks.
**We would like to thank reviewer bpvC again for these constructive suggestions!** We are committed to continually refining our work to ensure it closely aligns with real-world scenarios and challenges in LLM reasoning.
References:
[1] A. Saparov et al. Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples. In NeurIPS, 2023.
[2] Z. Jin et al. CLADDER: Assessing Causal Reasoning in Language Models. In NeurIPS, 2023.
[3] S. Vaze et al. No Representation Rules Them All in Category Discovery. In NeurIPS, 2023.
[4] S. Yu et al. Evaluating the Adversarial Robustness of Retrieval-Based In-Context Learning for Large Language Models. Arxiv, 2024.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Thank you for the detailed reply and for providing the required documentation.
> We supplement the NoRa dataset datasheet in Table 1 of the extra PDF file.
Good, it seems I was the only one to point out this issue. I believe this is fine now.
> What’s more, our submission goes beyond proposing a new dataset: we also propose a new reasoning method, CD-CoT, to improve the reasoning robustness against noisy rationales. In addition, based on NoRa, we reveal several insights for the under-explored noisy rationale problem that can be valuable for building trustworthy foundation models.
Sure, I did not penalize this part of the contribution.
> We will include the above discussion in the revision.
Thank you for the comparison.
> The noise generation is based on extensive research into the types of irrelevant or misleading information that can impact LLM reasoning.
Thank you for pointing to those sections.
---
Reply to Comment 1.1.1:
Title: Many thanks for your positive support and constructive comments!
Comment: Hi Reviewer bpvC,
Thank you so much for your comments and appreciation! We really value your constructive feedback, as it helps us improve our work. We will carefully incorporate the above discussions into our submission.
Please feel free to interact with us if you have any further questions.
Best regards,
Authors of #1934 | Rebuttal 1:
Rebuttal: ### A General Response by Authors:
**We sincerely thank all four reviewers for their thoughtful suggestions on our submission.**
**We have received four reviews with positive ratings 6,6,5,5. We are glad that all the reviewers have good impressions of our work**, including
- an under-explored and critical problem (bpvC, HgRq, qSQZ, ajBs)
- the construction of a valuable dataset (bpvC, ajBs)
- a novel and helpful method to address the problem (bpvC, HgRq)
- comprehensive experiments and several insights (HgRq, qSQZ, ajBs)
- clear writing and good presentation (bpvC, HgRq, qSQZ).
**In the rebuttal period, we have provided detailed responses to all the comments and questions point-by-point.** Specifically, we
- provide the datasheet of the NoRa dataset (W2 for bpvC)
- discuss the relationship and differences with related works (Q1 for bpvC, W2 for qSQZ, Q1 for ajBs)
- discuss noisy rationales in real scenarios (L2 for bpvC, W1, Q1 for qSQZ)
- clarify the evaluation metric (W2 for HgRq, Q2 for qSQZ)
- clarify the technical details of the CD-CoT method (W3 for HgRq, Q3 for ajBs)
- conduct additional experiments with CD-CoT (W1, Q2, Q3 for HgRq, Q5 for qSQZ)
- further explain the empirical results and findings (Q3, Q4 for qSQZ, Q2 for ajBs)
- provide a detailed revision plan for improving the presentation (W1 for bpvC), which will be implemented in the revised submission.
Besides, in the extra [one-page PDF file](https://openreview.net/attachment?id=iZJJL6v8Gw&name=pdf), we provide the datasheet of NoRa (Table 1), examples that noisy inputs can lead to noisy outputs (Table 2), and examples of denoised results under high-noise settings (Table 3).
**Regarding W1 for reviewer HgRq and Q5 for reviewer qSQZ, in the following, we further discuss and empirically verify the generalization ability of the proposed CD-CoT method to other datasets.**
Recall that the constructed NoRa benchmark covers five prevailing datasets from three different domains.
The current LLMs present significant vulnerability to noisy rationales in all five datasets of NoRa, while the proposed method CD-CoT has shown advanced and consistent robustness against noisy rationales. Specifically,
- **Robustness with different datasets:** CD-CoT consistently outperforms other methods in all five datasets in NoRa.
- **Robustness with different noise levels:** The results shown in Tab. 7 demonstrate the remarkable robustness of CD-CoT to varying noise levels. Across the Math, Symbolic, and Commonsense tasks, the performance decline of CD-CoT remains modest as the noise level increases.
- **Robustness with different LLMs:** The results in Tab. 9 further indicate that CD-CoT substantially improves over all three other baselines on the more powerful LLMs. Even on the relatively smaller Mistral-8x7B, CD-CoT significantly outperforms the other baselines on most tasks.
Note that Section 3 introduces a general framework for generating noisy rationales with existing datasets. This means more datasets can be integrated into NoRa if necessary for future research. **Here, we conduct additional experiments to generalize CD-CoT to three new datasets that are not covered in NoRa, including GSM-8K, Blocksworld, and BIG-Bench Hard Dyck Languages.** Specifically,
- GSM-8K: A math dataset of linguistically diverse grade-school math word problems.
- Blocksworld: A planning dataset simulating block-stacking tasks.
- BIG-Bench Hard Dyck Languages: A symbolic dataset designed for predicting the sequence of closing parentheses in a Dyck-4 word.
Then, we generate noisy rationales and compare the following setups:
- Zero-shot: Base model with no demonstration.
- CoT (clean rationales): Base model with 3 clean demonstrations.
- CoT (noisy rationales): Base model with 3 noisy demonstrations.
- CD-CoT (noisy rationales): Base model with 3 noisy demonstrations and our CD-CoT method.
| Dataset | Zero-shot | CoT (clean rationales) | CoT (noisy rationales) | CD-CoT (noisy rationales) |
| :---- | :---- | :---- | :---- | :---- |
| GSM-8K (300 questions) | 84.3 | 87.7 | 84.3 | 86.0 |
| Blocksworld (200 questions) | 2.0 | 25.0 | 13.0 | 25.5 |
| BIG-Bench Hard Dyck Languages (250 questions) | 12.4 | 40.8 | 29.2 | 35.2 |
The reasoning accuracy in the table above shows that CD-CoT consistently outperforms the zero-shot setting and standard CoT prompting under noisy rationales. **This is consistent with the findings in our submission, showing CD-CoT’s strong capability of generalization to new datasets.** These empirical results and discussions will be included in the revision.
**Lastly, we would appreciate all reviewers again.** Would you mind checking our response and confirming whether you have any further questions? We are anticipating your feedback during the discussion period!
Pdf: /pdf/ffe9baca0ad5f9d6252dea4a78ffafa7a8f588ac.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Probing the Decision Boundaries of In-context Learning in Large Language Models | Accept (poster) | Summary: This study investigates in-context learning in LLMs by examining their decision boundaries in binary classification tasks. The authors evaluate the performance of several mainstream models on these tasks. Despite achieving high test accuracy, the decision boundaries of these LLMs are often irregular and non-smooth. Factors influencing decision boundaries, such as model size and prompts, are explored. Additionally, the authors examine fine-tuning and adaptive sampling methods, finding them effective in improving boundary smoothness.
Strengths: - This is the first study, to my knowledge, to explore the decision boundaries of in-context learning LLMs.
- The experiments are thorough, and some findings, such as the use of uncertainty-aware active learning to help LLMs learn decision boundaries, are beneficial for future research.
Weaknesses: - The tests are mainly conducted on binary classification tasks, making it unclear if the findings can be generalized to other tasks.
- Although different model sizes are covered, these models are from different series. The reason for using these models should be clarified in the paper.
- The decision boundaries appear to be heavily influenced by the prompts, such as the design of labels and the order of in-context examples. Could the authors explore the robustness of their results to variations in prompts, like different formats or synonym replacements? If the robustness to prompts is poor, applying these results to other fields could be challenging.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 69: “Large Language Models (LLMs) are trained on vast corpora of text using unsupervised learning.” Shouldn't this be self-supervised learning?
- Line 132-134: “For the open-source models, we use the approach described in 3.2 to get predictions. For the closed-source models, we use the next token generation as the prediction…” This is confusing since section 3.2 only mentions visualization methods. What are the differences in prompt design or experiments between these two approaches?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Overall, this is a good paper. I would raise my score if the authors could address my main concerns, especially regarding the robustness of their results to variations in prompts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Comment: Dear Reviewer AvnH,
Thank you for your feedback. We hope to address your questions and comments below.
> Q1:"The tests are mainly conducted on binary classification tasks, making it unclear if the findings can be generalized to other tasks."
1. Our primary motivation for using synthetic 2D classification datasets is their amenability to decision boundary visualization, which is central to our paper's focus as decision boundaries are typically studied in 2D space, and our proposed mechanism offers novel insights into the generalization abilities of in-context learning. NLP classification tasks, with their discrete token nature and variable input lengths, pose significant challenges for visualization of decision boundaries along x and y axes.
2. Our use of synthetic function datasets is firmly grounded in established practices for studying in-context learning. Several recent high-profile works have exclusively used synthetic data to investigate ICL/learning algos:
- NeurIPS 2022 paper (https://arxiv.org/pdf/2208.01066) focuses entirely on linear functions for ICL.
- ICLR 2023 paper (https://arxiv.org/pdf/2211.15661) explores learning algorithms for ICL and primarily conducts experiments on synthetic linear functions.
- Google's research (https://arxiv.org/abs/2303.03846) employs synthetic linear function classes (see Section 6).
- NeurIPS 2019 paper (https://arxiv.org/pdf/1905.11604) utilizes similar scikit-learn datasets to study decision boundaries for SDE algorithms (see Figure 1).
These papers demonstrate that our approach is quite prevalent in the field.
3. To extend beyond binary tasks, we conduct new experiments on 6 real-world NLP classification datasets. Note that, to handle the high dimensionality of text embeddings, we projected them onto 2D space using t-SNE. (Any dimensionality reduction technique will introduce confounders into the analysis, but this price is inevitable if we are to extend our analysis.) We experiment with a total of 6 widely used NLP classification tasks, including both binary and multi-class settings. These include sentiment analysis and Textual Entailment Recognition, providing a broader perspective on the applicability of our approach. Our results, presented in **Figure 1 of the rebuttal PDF**, demonstrate similar non-smooth decision boundary characteristics in NLP tasks as observed in our synthetic datasets.
> Q2: "Although different model sizes are covered, these models are from different series. The reason for using these models should be clarified in the paper."
We use these LLMs to provide a comprehensive analysis across different sizes and architectures, representing current SoTA open-source LLMs. Regarding sizes, due to computational constraints in an academic setting and the expensive nature of querying decision boundaries over large grids of points, we limited our analysis to models no larger than 13B parameters. Therefore, to complete the size series and gain insights into smaller-scale models, we included the pruned Llama 1.3B model.
> Q3: "The decision boundaries appear to be heavily influenced by the prompts, such as the design of labels and the order of in-context learning examples. ... explore the robustness of their results to variations in prompts, like different formats or synonym replacements?"
To address this concern, we have conducted experiments to explore how decision boundaries are affected by prompt formats, using 4 different synonyms for key terms. As shown in **Figure 2 of the rebuttal PDF**, we find that the LLM's decision boundary is indeed affected by the prompt format, which aligns with the importance of prompt engineering in ICL. However, the overall non-smoothness level of the decision boundaries remained consistent. We view prompt formats as another factor influencing the decision boundary rather than undermining our results, since our central observations regarding the non-smoothness of decision boundaries show similar patterns across different prompts. We will add this as another influencing factor in our paper.
> Q4: “Large Language Models are trained on vast corpora of text using unsupervised learning.” Shouldn't this be self-supervised learning?
Sorry for the typo! We will revise this.
> Q5: “For the open-source models, we use the approach described in 3.2 to get predictions. For the closed-source models, we use the next token generation as the prediction…” This is confusing since section 3.2 only mentions visualization methods. What are the differences in prompt design or experiments between these two approaches?
We apologize for the confusion. For both open- and closed-source models, we used the same prompts. For open-source models, we used the logits to obtain predictions for the class labels. For closed-source models, we used the generated next token as the prediction, since the logits were not available to us.
We hope this addresses your concerns. Please let us know if you have any questions.
---
Rebuttal Comment 1.1:
Title: Response to the author
Comment: Thank you. Considering this is the first study on decision boundaries in LLMs (as far as I know) and the experiments are solid, I have raised my score.
---
Reply to Comment 1.1.1:
Title: Thank you for raising score!
Comment: We thank the reviewer for raising the score and are pleased that you find our work novel and our experiments solid. | Summary: This paper investigates the decision boundary of in-context learning of Transformers. The paper shows that for three toy tasks, the decision boundaries of in-context learning of various pretrained models are not smooth. The paper then explores the method for improving the smoothness of decision boundaries and finds that supervised finetuning can mitigate this non-smoothness. Furthermore, finetuning on one task can benefit the smoothness of decision boundaries on other tasks.
Strengths: 1. The decision boundary of in-context learning is an interesting research topic, which seems not investigated by previous work.
2. The paper provides multiple empirical results and finds a method for improving the smoothness of decision boundary of in-context learning.
3. The writing of this paper is clear.
Weaknesses: 1. The empirical results are limited to toy datasets, which may reduce the impact of the proposed smoothness-improving method.
2. The paper does not empirically show that a smoother decision boundary leads to higher generalization accuracy. On the contrary, the paper finds that GPT-4o has high accuracy along with a non-smooth decision boundary, which makes improving smoothness less motivated.
3. No quantitative evaluation of smoothness is present in the papers, making smoothness difficult to compare.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The main limitation is that the experiments are conducted only on toy datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer PktE,
Thank you for your feedback. We hope to address your questions and comments below.
> "The empirical results are limited to toy datasets, which may reduce the impact of the proposed smoothness-improving method."
1. Our primary motivation for using synthetic 2D classification datasets is their amenability to decision boundary visualization, which is central to our paper's focus as decision boundaries are typically studied in 2D space, and our proposed mechanism offers novel insights into the generalization abilities of in-context learning. NLP classification tasks, with their discrete token nature and variable input lengths, pose significant challenges for visualization of decision boundaries along x and y axes.
2. Our use of synthetic function datasets is firmly grounded in established practices for studying in-context learning. Several recent high-profile works have exclusively used synthetic data to investigate ICL/learning algorithms:
- NeurIPS 2022 paper (https://arxiv.org/pdf/2208.01066) focuses entirely on linear functions for ICL.
- ICLR 2023 paper (https://arxiv.org/pdf/2211.15661) explores learning algorithms for ICL and primarily conducts experiments on synthetic linear functions.
- Google's research (https://arxiv.org/abs/2303.03846) employs synthetic linear function classes (see Section 6).
- NeurIPS 2019 paper (https://arxiv.org/pdf/1905.11604) utilizes similar scikit-learn datasets to study decision boundaries for SDE algorithms (see Figure 1).
These papers demonstrate that our approach is quite prevalent in the field. The use of scikit-learn is simply a means to generate widely accepted and reproducible benchmarks with clear characteristics such as linear/non-linear patterns.
3. Following up on the reviewers’ suggestions, we include additional experiments on 6 high-dimensional real-world classification datasets. Note that, to handle the high dimensionality of text embeddings, we projected them onto 2D space using t-SNE. (Any dimensionality reduction technique will introduce confounders to the analysis, but this price is inevitable to extend our analysis.) These 6 widely-used NLP classification tasks include both binary and multi-class settings, such as sentiment analysis and Textual Entailment Recognition, providing a broader perspective on the applicability of our approach. Our results, presented in **Figure 1 of the rebuttal PDF**, demonstrate similar non-smooth decision boundary characteristics in NLP tasks as observed in our synthetic datasets.
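To make the benchmark-generation and grid-querying setup referred to above concrete, here is a sketch (the dataset parameters are illustrative, and a kNN classifier stands in for the queried LLM):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier

# Reproducible non-linear 2D benchmark, as generated by scikit-learn.
X, y = make_moons(n_samples=128, noise=0.1, random_state=0)

# Probe a classifier on a dense grid; in our experiments the LLM with
# in-context examples takes the place of this kNN baseline, which is
# why querying decision boundaries over large grids is expensive.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
xs = np.linspace(X[:, 0].min() - 0.5, X[:, 0].max() + 0.5, 50)
ys = np.linspace(X[:, 1].min() - 0.5, X[:, 1].max() + 0.5, 50)
grid = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)
preds = clf.predict(grid)  # one label per grid point: 50 * 50 = 2500 queries
```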
> "The paper does not empirically show that smoother decision boundary smoothness leads to higher generalization accuracy. On the contrary, the paper finds that gpt-4o has high accuracy along with not smooth decision boundary, which makes improving smoothness less motivated."
- While we do not conclusively link boundary smoothness to NLP task performance, this was not our primary aim. Our research aims to provide an initial exploration of decision boundary characteristics in in-context learning, a crucial yet understudied area for understanding ICL behavior.
- Following up on the reviewer’s suggestion, to investigate the benefits of smoother boundaries, we fine-tuned a Llama3 8B model on synthetic classification tasks and tested it on 7 diverse NLP datasets on ICL. As shown in the **Table 1 of the rebuttal pdf**, the results were promising: the fine-tuned model showed significantly higher performance on several tasks and improved average performance across seven tasks, suggesting smoother boundaries can contribute to better generalization in NLP tasks.
- The fact that GPT-4o has high accuracy and a less smooth decision boundary does not negate the importance of decision boundary smoothness. Decision boundary characteristics often imply generalization to unseen data. GPT-4o's high accuracy on current benchmarks does not necessarily imply good generalization, as its training data is not fully known. Different models may achieve high performance through various mechanisms, and smoothness could be beneficial, particularly for specific tasks that require high reliability, robustness, and interpretability.
> "No quantitative evaluation of smoothness is present in the papers, making smoothness difficult to compare."
- Formally measuring smoothness is challenging due to its dependence on grid discretization. Qualitatively, the difference in smoothness is evident when comparing traditional models (e.g., kNN, decision trees, logistic regression) to LLMs. To address this concern quantitatively, we explored several metrics, including curvature (average rate of change of the tangent vector along the decision boundary) and boundary length (total length of the decision boundary). However, measuring the continuity behavior of decision boundaries with discretized evaluations is non-trivial and these quantitative metrics are challenging to define and often do not show obvious trends.
- Nevertheless, we define and present an empirical metric for decision boundary smoothness: Nearest Neighbor Entropy. This measure calculates the average entropy of predictions among the k-nearest neighbors for each point in the grid, reflecting the variability and smoothness of the boundary.
- We show in **Table 2 of the rebuttal PDF** the NN entropy values for the models shown in Figure 1 of our submitted paper, comparing the smoothness of traditional models and state-of-the-art LLMs. LLMs exhibit higher NN entropy than models like Decision Trees and kNN. We also provide results for different LLMs with 64 and 128 in-context examples, demonstrating that as the number of examples increases, the entropy generally decreases. This quantitative approach, combined with our qualitative observations, provides a more comprehensive assessment of decision boundary smoothness.
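A minimal sketch of the Nearest Neighbor Entropy described above (the choice of k and the sanity-check labels are illustrative; the exact settings behind the rebuttal PDF tables may differ):

```python
import numpy as np
from scipy.stats import entropy
from sklearn.neighbors import NearestNeighbors

def nn_entropy(grid_points, grid_preds, k=5, n_classes=2):
    """Average entropy of predicted labels among each grid point's
    k nearest neighbors; higher values mean a less smooth boundary."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(grid_points)
    _, idx = nbrs.kneighbors(grid_points)
    # scipy's entropy normalizes the label counts into probabilities.
    ents = [entropy(np.bincount(grid_preds[nb], minlength=n_classes))
            for nb in idx]
    return float(np.mean(ents))

# Sanity check: a clean half-plane boundary scores lower than random labels.
pts = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
smooth = (pts[:, 0] >= 5).astype(int)
noisy = np.random.default_rng(0).integers(0, 2, len(pts))
assert nn_entropy(pts, smooth) < nn_entropy(pts, noisy)
```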
We hope this addresses your concerns. Please let us know if you have any questions.
---
Rebuttal 2:
Comment: Dear Reviewer PktE,
Thank you again for your helpful feedback! Your suggestions have helped us solidify our experiments and analysis by:
1. Conducting additional experiments on NLP tasks spanning 6 NLP classification datasets,
2. Empirically demonstrating that smoothness on synthetic datasets can lead to improvements on downstream tasks, on average, across 7 ICL NLP tasks. These first two experiments address your main concern that "the main limitation is that the experiments are conducted only on toy datasets."
3. Using quantitative metrics, such as nearest neighbor entropy, as a measure of smoothness.
We hope this addresses the concerns you listed in your review. As the discussion period is nearing its end, could you please let us know if you have any further questions for us? We are happy to address any questions or concerns you may have.
Thank you for your time! | Summary: The authors study the decision boundaries of LLMs in binary in-context learning tasks, finding that decision boundaries can be non-smooth despite high test accuracy and linear separability of the task itself. They examine numerous methods for smoothing the decision boundaries, including SFT and training a transformer that intentionally learns a smooth decision boundary.
Strengths: Fascinating, insightful paper. Well-written, good visualizations.
Robustly evaluated across model families, model sizes, number of examples. Even examines the effects of quantization on the decision boundary, which might be one of the most important points in the paper (4-bit quantization affects the boundary quite a bit).
Includes sections on fine-tuning to smooth decision boundary as well as training from scratch to do so.
Great RW section, up to date with the most recent work.
No-brainer accept.
Weaknesses: None that I can see.
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer WmaA,
Thank you for the positive and encouraging review! We are glad to find that you found our work to be insightful, well-written and robustly evaluated :) | Summary: This paper conducts a wide range of experiments exploring the smoothness of decision boundary generated by LLMs. Synthetic datasets from scikit-learn are used for the experiments, and experimental results are showing that various factors affect decision boundaries of LLMs.
Strengths: - There are analyses from diverse perspectives, in terms of model size, in-context examples, quantization, prompt format, example order. Extensive experimental results help interpret the targeted phenomenon. Moreover, changes when fine-tuning LLMs with in-context examples are observed.
Weaknesses: **[W1] Justification of task and dataset**:
- There have been works trying to understand the mechanism of in-context learning by conducting experiments on NLP classification tasks. However, it seems unclear why the authors choose scikit-learn-generated dataset. Is this dataset proper to investigate the effectiveness of in-context learning? In addition, can experimental results using this dataset be generalized to NLP classification tasks?
- Moreover, there are concerns about the comparison groups. Traditional ML algorithms (e.g., Decision Tree, k-NN, etc.) map each datum into a vector space, while LLMs map the entire context into vectors. Can comparing these two mechanisms be regarded as a fair process?
**[W2] Analysis**: I have concerns that there are sets of experimental reporting without interpretations of probable reasons. There is a core question: How the smoothness of decision boundary helps understand in-context learning mechanism?
**(Update) After the author's rebuttal, most of these concerns are clarified, thus raising the score 3 to 5.**
Technical Quality: 2
Clarity: 3
Questions for Authors: - It would be better to define the concept of smoothness of the decision boundary more formally, since expecting perfect smoothness of LLMs' decision boundaries is rather unrealistic.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer PtXB,
Thank you for your feedback. We hope to address your questions and comments below.
> Q1: There have been works trying to understand the mechanism of in-context learning by conducting experiments on NLP classification tasks. However, it seems unclear why the authors choose scikit-learn-generated dataset. Is this dataset proper to investigate the effectiveness of in-context learning?
1. Our primary motivation for using synthetic 2D classification datasets is their amenability to decision boundary visualization, which is central to our paper's focus as decision boundaries are typically studied in 2D space, and our proposed mechanism offers novel insights into the generalization abilities of in-context learning. NLP classification tasks, with their discrete token nature and variable input lengths, pose significant challenges for visualization of decision boundaries along x and y axes.
2. Our use of synthetic function datasets is firmly grounded in established practices for studying in-context learning. Several recent high-profile works have exclusively used synthetic data to investigate ICL/learning algorithms:
- NeurIPS 2022 paper (https://arxiv.org/pdf/2208.01066) focuses entirely on linear functions for ICL.
- ICLR 2023 paper (https://arxiv.org/pdf/2211.15661) explores learning algorithms for ICL and primarily conducts experiments on synthetic linear functions.
- Google's research (https://arxiv.org/abs/2303.03846) employs synthetic linear function classes (see Section 6).
- NeurIPS 2019 paper (https://arxiv.org/pdf/1905.11604) utilizes similar scikit-learn datasets to study decision boundaries for SDE algorithms (see Figure 1).
These papers demonstrate that our approach is quite prevalent in the field. The use of scikit-learn is simply a means to generate widely accepted and reproducible benchmarks with clear characteristics such as linear/non-linear patterns.
3. Following up on your suggestions, we include additional experiments on 6 NLP classification datasets. Note that, to handle the high dimensionality of text embeddings, we projected them onto 2D space using t-SNE. (Any dimensionality reduction technique will introduce confounders to the analysis, but this price is inevitable to extend our analysis.) These 6 widely-used NLP classification tasks cover both binary and multi-class settings, including sentiment analysis and Textual Entailment Recognition. As shown in **Figure 1 of the rebuttal PDF**, we demonstrate similar non-smooth decision boundary characteristics in NLP tasks as observed in our synthetic datasets.
> Q2: There are concerns about the comparison groups. Traditional ML algorithms (e.g., Decision Tree, k-NN) map each datum into a vector space, while LLMs map the entire context into vectors. Can comparing these two mechanisms be regarded as a fair process?
We view in-context learning in LLM as a learning algorithm and use decision boundary visualization as a tool to analyze the generalization ability. In this sense, ICL in LLM is comparable to the traditional classifiers and MLP. Related works have also studied ICL in LLM as learning algorithms. For example, Akyürek et al. proves transformers can implement learning algorithms for linear models based on GD and closed-form ridge regression.
> Q3: "There is a core question: How the smoothness of decision boundary helps understand in-context learning mechanism?" & "... can experimental results using this dataset be generalized to NLP classification tasks?"
- Our work offers novel insights into the ICL mechanism by analyzing decision boundary characteristics, an aspect previously unexplored. Unlike prior research focused on accuracy metrics, we examine the underlying decision-making process of LLMs during ICL. The smoothness of decision boundaries provides valuable information about the model's generalization capabilities, as every grid point (except the in-context examples) lies outside the “training set.”
- To further link decision boundary smoothness with NLP task performance, we fine-tuned a Llama3 8B model on synthetic tasks and tested it on 7 NLP ICL datasets. As shown in **Table 1 of the rebuttal PDF**, the fine-tuned model shows higher performance on several tasks and improved average performance across all tasks. These results suggest that smoother decision boundaries can contribute to generalization in NLP tasks.
> Q4: ... better to define the concept of smoothness of the decision boundary more formally, since expecting perfect smoothness of LLMs' decision boundaries is rather unrealistic.
- Formally measuring smoothness is challenging due to grid discretization. Qualitatively, the difference in smoothness is evident when comparing traditional models (e.g., kNN, DT) to LLMs. To address this concern quantitatively, we explored several metrics, including curvature (average rate of change of the tangent vector along the decision boundary) and boundary length. However, measuring the continuity behavior of decision boundaries with discretized grids is non-trivial and these metrics are challenging to define and often do not show obvious trends.
- Nevertheless, we define an empirical metric for decision boundary smoothness: Nearest Neighbor Entropy. This calculates the average entropy of predictions among the k-nearest neighbors for each grid point.
- We show in **Table 2 of the rebuttal PDF** the NN entropy for the models shown in Figure 1 of our submitted paper, comparing the smoothness of traditional models and state-of-the-art LLMs. LLMs exhibit higher entropy than models like Decision Trees and kNN. We also provide results for different LLMs with 64 and 128 in-context examples, showing that as the number of examples increases, the entropy generally decreases. This quantitative approach, combined with our qualitative observations, provides a more comprehensive analysis of decision boundaries.
We hope this addresses your concerns. Please let us know if you have any questions.
---
Rebuttal 2:
Comment: Dear Reviewer PtXB,
Thank you again for your helpful feedback! Your suggestions have helped us solidify our experiments and analysis by:
1. Conducting additional experiments on NLP tasks spanning 6 NLP classification datasets,
2. Demonstrating that smoothness on synthetic datasets can lead to improvements on downstream tasks, on average, across 7 ICL NLP tasks, and
3. Using quantitative metrics: nearest neighbor entropy, as a measure of smoothness.
We hope this addresses the concerns you listed in your review. As the discussion period is nearing its end, could you please let us know if you have any further questions for us? We are happy to address any questions or concerns you may have. Thank you for your time!
---
Rebuttal Comment 2.1:
Comment: Thank you for clarifying most of my concerns. I understand synthetic benchmarks are also used in this analytic field. Furthermore, I would appreciate for including NLP experiment results. It clearly helps my understanding.
I believe refining smoothness can be a key to understanding and improving LLMs, but I'm still doubtful about the hypothesis that smoothness has a causal relationship with model performance, given that it relies on observed case samples. Thus, I would like to raise my score from 3 to 5.
---
Reply to Comment 2.1.1:
Comment: Thank you for raising the score and for your valuable feedback! We appreciate your insights on smoothness and are glad the NLP experiments clarified your understanding. We will incorporate the analysis of these results into our paper. Thank you! | Rebuttal 1:
Rebuttal: To address the reviewer's concerns, we conducted additional experiments. Here is a summary:
- **NLP Multi-class Classification Tasks (Figure 1):**
We included additional experiments on six widely-used NLP classification datasets, addressing concerns about our use of toy datasets. Note that, to handle the high dimensionality of text embeddings, we projected them onto 2D space using t-SNE. (Any dimensionality reduction technique will introduce confounders to the analysis, but this price is inevitable to extend our analysis.) These 6 tasks cover both binary and multi-class settings, including sentiment analysis and Textual Entailment Recognition. As shown in Figure 1 of the rebuttal PDF, we demonstrate similar non-smooth decision boundary characteristics in NLP tasks as observed in our synthetic datasets.
- **Impact of Smoothness on NLP Performance (Table 1):**
Following up on the reviewer's concerns about how decision boundary smoothness affects NLP task performance, we fine-tuned a Llama3 8B model on synthetic classification tasks and tested it on 7 diverse NLP ICL datasets. As shown in Table 1 of the rebuttal PDF, the results were promising: although fine-tuned only on synthetic classification datasets, the model showed significantly higher performance on several NLP ICL tasks and improved average performance across the 7 tasks, suggesting that smoother boundaries on synthetic tasks can contribute to better generalization in NLP tasks.
- **Quantifying Decision Boundary Smoothness (Table 2):**
Following up on the reviewer's concerns about the lack of a formal definition of decision boundary smoothness in our setting, we define an empirical metric for decision boundary smoothness: Nearest Neighbor Entropy. This calculates the average entropy of predictions among the k-nearest neighbors for each grid point. Our analysis reveals that LLMs exhibit higher entropy than traditional models like Decision Trees and kNN. Furthermore, increasing the number of in-context examples generally decreases entropy, providing quantitative support for our qualitative observations. Apart from this metric, we also explored several other metrics, including curvature (average rate of change of the tangent vector along the decision boundary) and boundary length (total length of the decision boundary). However, measuring the continuity behavior of decision boundaries with discretized evaluations is non-trivial, and these quantitative metrics are challenging to define and often do not show obvious trends.
- **Influence of Prompt Format (Figure 2):**
We find that the LLM's decision boundary is also affected by the prompt format, which aligns with the importance of prompt engineering in ICL. However, the overall non-smoothness level of the decision boundaries remained consistent. We view prompt formats as another factor influencing the decision boundary rather than undermining our results, since our central observations regarding the non-smoothness of decision boundaries show similar patterns across different prompts. We will add this as another influencing factor in our paper.
Pdf: /pdf/a3562e1aa13ae9c0e477099e8e24a0a0283fa156.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously | Accept (poster) | Summary: This paper studies the transferability of unlearnable examples (UEs) across different learning paradigms, i.e., supervised learning and self-supervised contrastive learning. Different from existing works including both supervised UE generation methods and hybrid methods, this paper argues that strong data augmentations with supervised learning are sufficient for unsupervised unlearnability. It then proposes two strong data augmentations equipped UE generation methods AUE (with standard training) and AAP (with adversarial training). The two proposed methods demonstrate certain superiority over the existing methods on low- and high-resolution datasets.
Strengths: 1. The proposed methods are far more efficient than UE generation methods that involve contrastive learning. They leverage supervised learning with strong data augmentation to achieve unsupervised unlearnability against contrastive learning.
2. The proposed method is simple but effective. The revealed finding that strong data augmentation is important for contrastive learning is interesting.
3. Extensive experiments confirmed the capability and efficiency of the proposed attacks.
Weaknesses: 1. The alignment and uniformity metrics are all from existing works, which lowers the novelty of the finding in Table 1: "contrastive unlearnability is highly related to huge alignment and uniformity gaps". And the two proposed methods have little to do with the two gaps?
2. While the finding that "supervised learning with strong data augmentation can achieve unsupervised unlearnability" is interesting, the two proposed methods have limited technical novelty. In other words, they do not explore the power of data augmentation more systematically. The current mechanism is very ad-hoc.
3. The performance difference (AUE vs. AAP) between low- and high-resolution datasets is intriguing. AAP targets adversarial training exploitation, so why should it perform better or worse than AUE against contrastive learning? Does contrastive learning have anything to do with adversarial training?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Should the worst-case unlearnability contain other learning paradigms rather than just supervised learning and contrastive learning? For example, MAE or autoregressive pretraining. We cannot enumerate all training paradigms.
2. Why is data augmentation so important for obtaining unsupervised unlearnability, and what if there are non-data-augmentation-based unsupervised learning methods?
--------
Post rebuttal: I have increased my rating according to the authors' responses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The technical novelty of this work is somewhat limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their detailed and thoughtful reviews!
We aim to resolve the issues highlighted and believe our answers will do so.
**[Weakness1] Alignment and uniformity gaps**
- Alignment and uniformity are widely adopted tools for illustrating the mechanism of contrastive learning. In our problem setting, although the final effectiveness of availability attacks is evaluated through classification accuracy, we still need to carefully examine how the attacks affect contrastive learning at the encoder level. Thus, we borrow the concepts of alignment and uniformity and introduce the gaps between their values on clean and poisoned data, respectively.
By studying the gaps, we explain why some attacks are effective against CL algorithms while others are not.
- Effective attacks cause significant differences between the features of poisoned samples and those of clean samples, ultimately rendering the learned classifier ineffective on the ground truth distribution.
- Regarding our proposed methods, Figure 5(b) demonstrates that AUE exhibits significant alignment and uniformity gaps, whereas unenhanced UE does not, as shown in Table 1.
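For reference, the alignment and uniformity metrics we borrow follow the standard definitions popularized by Wang and Isola (2020); a minimal sketch on synthetic normalized features (the α and t values and the feature construction here are illustrative, not our experimental settings):

```python
import numpy as np
from scipy.spatial.distance import pdist

def alignment(z1, z2, alpha=2):
    # Mean distance^alpha between features of two augmented views
    # of the same samples (lower = better aligned positives).
    return float(np.mean(np.linalg.norm(z1 - z2, axis=1) ** alpha))

def uniformity(z, t=2):
    # Log of the average Gaussian potential over all feature pairs
    # (lower = features spread more uniformly on the sphere).
    return float(np.log(np.mean(np.exp(-t * pdist(z) ** 2))))

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Synthetic encoder features for two views of 128 samples; the "gap" we
# study is each metric on poisoned data minus its value on clean data.
rng = np.random.default_rng(0)
base = rng.normal(size=(128, 32))
z1 = normalize(base + 0.1 * rng.normal(size=(128, 32)))
z2 = normalize(base + 0.1 * rng.normal(size=(128, 32)))
a, u = alignment(z1, z2), uniformity(z1)
```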
**[W2] Systematic study of data augmentation** The reviewer said our proposed method is ad-hoc. We believe some misunderstandings need to be clarified.
- *[Technique novelty]* To obtain unlearnability for CL, we are the first to adopt contrastive augmentation in the SL-based training process. On this topic, we have moved away from the path dependence on contrastive error minimization.
- *[Theoretical results]* We provide a toy model in the appendix to theoretically verify the intuition of our method (Proposition E.1): if the supervised training process uses the same data augmentation as CL, then this process also optimizes the contrastive loss.
- *[Augmentation strategies]* In the discussion with Reviewer 5WZD, we explore more data augmentation strategies in our proposed attacks, including augmentation decoupling, augmentation differentiability, and dynamic strategies.
**[W3] AAP better or worse than AUE**
- On the one hand, when the dataset has higher resolution and more categories, the optimization difficulty in generating adversarial poisoning from scratch increases, since AP finds non-robust features that rely on a good classifier.
- On the other hand, since supervised error minimization is easy to optimize, the high resolution provides more room for AUE to create effective poisoning patterns to fool the subsequent algorithms.
**[W4] Contrastive learning and adversarial training**
If we understand correctly, the reviewer's question is why adversarial poisoning (AP) is effective for contrastive learning. We have discussed it in Appendix D.3 of our paper. On the one hand, [1] shows that some contrastive augmentation can squeeze low-frequency "shortcuts" in unlearnable examples.
On the other hand, contrastive error minimization-based attacks such as CP and TP generate high-frequency perturbations (see Figure 8 in our paper).
From the visualization, perturbations from AP contain more high-frequency patterns, which may deceive contrastive learning more easily.
[1] Liu Z, et al. Image shortcut squeezing: Countering perturbative availability poisons with compression. ICML 2023.
**[Q1] Learning paradigms other than contrastive learning**
- In Table 5 of our paper, we have checked the supervised contrastive learning (SupCL) and a semi-supervised method FixMatch.
- We evaluate our attacks against MAE by end-to-end fine-tuning on CIFAR-10. In the presence of AUE/AAP, the test accuracy of fine-tuning decreases by 57%/37%. In Figure 2 of the additional document, the training accuracy in the presence of attacks increases more quickly than that in the clean case, while the test accuracy is the opposite, meaning that the attacks create shortcuts for MAE.
Although our methods were not specifically designed for MAE, they can transfer to MAE.
|Clean | AUE | AAP |
| --- | --- | --- |
| 89.63 | 32.36 | 51.93 |
**[Q2] Importance of data augmentation.**
- Data augmentation is an essential component for CL since it creates different views of the same data point, which is fundamental for learning useful representations. Our work leverages stronger data augmentation to improve the effectiveness of SL-based attacks against CL.
- As other reviewers pointed out, there is a special case in CL, i.e., the text-image contrastive learning algorithm CLIP. The image encoder of CLIP does not involve contrastive augmentation. We conduct experiments on linear probing upon the CLIP encoder and show that our attacks remain effective against this algorithm. See our discussion with Reviewer 61jU for detailed results.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: Thanks. Most of my concerns have been addressed, thus I increase my rating to 6. | Summary: This paper explores efficient availability attacks that target both supervised and contrastive learning algorithms simultaneously. The authors propose two attack methods, named AUE (Augmented Unlearnable Examples) and AAP (Augmented Adversarial Poisoning), which utilize enhanced data augmentations to craft effective poisoning attacks. These methods achieve state-of-the-art unlearnability with reduced computational demands, showing promise for practical applications. The paper evaluates the effectiveness of these attacks across multiple datasets and demonstrates their superiority over existing methods in achieving worst-case unlearnability while handling high-resolution datasets efficiently.
Strengths: - The motivation behind the study is sound, as it addresses an important gap in existing methods, which are unable to achieve unlearnability for both supervised and contrastive learning. This realization underscores the need to explore techniques that can simultaneously achieve supervised and contrastive unlearnability.
- The two proposed attack methods, AAP (Augmented Adversarial Poisoning) and AUE (Augmented Unlearnable Examples), are presented as simple yet effective.
- The experiments conducted are comprehensive and robust.
Weaknesses: - While the motivation is justified, the technical contributions could be considered direct. Integrating data augmentation into the generation of poisons is a straightforward approach, which may not reflect a non-trivial technical contribution.
- The methods lack theoretical analysis, which raises questions about the guaranteed effectiveness of data augmentation in all scenarios. Additionally, there is no explanation of why data augmentation works effectively or if it completely addresses the issue at hand through augmentation alone.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the biggest challenge in this work?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their valuable comments and suggestions! We hope our responses sufficiently address the concerns raised.
**[Weakness1] Theoretical guarantees and why the enhancing data augmentation works.**
- In fact, we conduct theoretical analysis for a toy model in the appendix to verify the intuition of our proposed method (**Proposition E.1**): if the supervised training process uses the same data augmentation as CL, then this process also optimizes an upper bound of the contrastive loss. As a result, supervised error minimization mimics contrastive error minimization. Such mimicry makes the modified SL-based generation process enhance the contrastive unlearnability of final perturbations.
- By the way, we agree with the reviewer that availability attacks currently lack theoretical guarantees similar to those provided by certified adversarial robustness or differential privacy. Although some works have clarified the optimization objectives of such attacks, it is still insufficient to guarantee the final achievement of unlearnability.
**[W2] Technical contribution** To obtain unlearnability for CL, we are the first to adopt contrastive augmentation in the SL-based training process. The proposed method is simple, effective, and efficient. Compared to other more complex algorithms, it is easier to scale to large datasets and has greater potential for real-world applications.
**[Question] Biggest challenge**
In our opinion, the biggest challenge in this work is breaking free from the reliance on existing paths. Current methods on this topic, including CP, TUE, and TP, are all based on the contrastive training process which is time-consuming and meets challenges in scaling up to real-world datasets. We believe it’s possible to obtain effective availability attacks through the supervised training process which is more efficient and easier to optimize. The remaining task is to consider how to connect the characteristics of contrastive learning and supervised learning and propose an effective algorithm.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer J4Hn,
We have submitted a rebuttal addressing your comments. Could you please review it and confirm if your concerns have been resolved?
Best regards,\
Authors
---
Rebuttal 2:
Comment: I appreciate the authors' response. My concerns regarding the theoretical analysis and technical contributions have been mostly addressed. I would like to raise my score accordingly. | Summary: This paper claims to introduce a threat scenario where the adversary may use contrastive learning to bypass the unlearnable examples that are crafted only for supervised learning. In this threat model, the authors showcase that previous works on unlearnable examples crafted only for supervised learning may fall short of contrastive learning. Based on this observation, the authors propose a new approach that considers both supervised and contrastive learning when generating unlearnable examples. The proposed approach is also efficient due to the augmentation adopted augmentation strategies. Extensive experiments show that the proposed method is effective in this new threat scenario.
Strengths: The paper is clearly presented. The proposed approach is efficient thanks to the augmentation strategies that imitate CL. As a consequence, AUE and AAP are also easily scalable to practical datasets, like ImageNet subsets.
The authors provided a very complete analysis to evaluate the proposed methods, including discussions on limitations and performance under different defenses. It provides a relatively complete understanding of the proposed approaches in different threat scenarios.
Weaknesses: The introduced topic is interesting but also limited. The authors point out that there is a lack of unlearnable approaches that are effective against both supervised learning (SL) and contrastive learning (CL). This observation is not fully novel, since CP, TP, and TUE have similar observations and are effective in most cases against both types of learning. For example, CP is an approach that is designed for contrastive learning but is also effective against supervised learning. I would suggest that the author could tone down the claims that hint at this observation as a novel finding.
The main contribution of this work is to employ both SL and CL losses in the generation of unlearnable examples, including augmentations that imitate the CL in the SL scenario. The proposed approaches show better average and worst-case performance, but the improvement is incremental, according to Table 3 and Table 4. I would suggest that the authors could provide a deeper and more detailed analysis of the difference between SL and CL unlearnable examples by conducting more ablation studies on the augmentation strategies. The working mechanism behind the augmentation strategies is of interest to the research community.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please motivate the threat scenario where the proposed unlearnable examples are necessary. Further analysis of the augmentation strategies could be clarified.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed most limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback and valuable suggestions! We hope that our clarifications address your queries and concerns effectively.
**[Weakness1] Topic novelty and previous works**
- As the reviewer mentioned, the CP and TUE papers were the first to point out the vulnerability of unlearnable examples to contrastive learning.
- However, we notice that CL-based attacks are time-consuming and suffer from limited algorithm transferability; for example, CP is not consistently effective against SL, and TP's performance degrades against different evaluation CL algorithms (see Section 5.2 in our paper).
- Regarding the challenges in efficiency and effectiveness, our work provides two SL-based attacks that achieve both SL and CL unlearnability.
- In summary, we sincerely accept your suggestions: we will adopt a more modest tone in the abstract and introduction and will clarify our contributions in the subsequent version.
**[W2] Augmentation strategies** We indeed conducted more ablation studies on augmentation strategies. Here are our results.
- **(1) Dynamic augmentation scheme.** In the paper, we implement the proposed attacks with a constant augmentation strength. We also attempt dynamic augmentation schemes, including annealing and tempering, which linearly change the augmentation strength during the perturbation generation process. In Figure 1 of the additional document, the annealing scheme achieves better supervised unlearnability than the tempering scheme; regarding contrastive unlearnability, annealing outperforms tempering for small final strengths, and the opposite holds for large final strengths. Compared with the constant scheme in the paper, the tempering scheme with a particular final strength is marginally better, while other options are worse. In summary, these two dynamic schemes do not outperform the default constant scheme used in the paper.
- **(2) Applying perturbation before/after augmentation**
Another key factor in the effectiveness of our method is that perturbation should be added **before** augmentation. In the perturbation generation process, if the perturbation is added before augmentation $\mu$, then its gradient propagates through $\mu$, i.e., $\nabla_\delta f(\mu(x+\delta))$; otherwise, its gradient does not go through $\mu$, i.e., $\nabla_\delta f(\mu(x)+\delta)$.
We conduct an ablation study of applying perturbation **after** augmentation for AUE in Figure 3 of the additional document. In that case, enhancing augmentation strength does not improve the effectiveness against contrastive learning.
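To make the chain-rule distinction above concrete, here is a minimal numpy sketch (not the paper's actual pipeline): a linear map `A` stands in for the augmentation $\mu$ and a quadratic loss stands in for $f$, so both gradients have closed forms that can be checked against finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
x = rng.normal(size=d)
delta = 0.1 * rng.normal(size=d)
A = rng.normal(size=(d, d))            # linear stand-in for the augmentation mu
f = lambda z: 0.5 * np.dot(z, z)       # simple loss on the augmented point

# perturbation added BEFORE augmentation: the gradient flows through mu,
# here grad_delta f(A(x + delta)) = A^T A (x + delta)
grad_before = A.T @ (A @ (x + delta))

# perturbation added AFTER augmentation: the gradient bypasses mu,
# here grad_delta f(Ax + delta) = Ax + delta
grad_after = A @ x + delta

# sanity check both analytic gradients against central finite differences
def num_grad(loss, v, eps=1e-6):
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v); e[i] = eps
        g[i] = (loss(v + e) - loss(v - e)) / (2 * eps)
    return g

assert np.allclose(grad_before, num_grad(lambda dd: f(A @ (x + dd)), delta), atol=1e-4)
assert np.allclose(grad_after, num_grad(lambda dd: f(A @ x + dd), delta), atol=1e-4)
```

The two gradients differ exactly by the Jacobian of the augmentation, which is why perturbing before augmentation lets the generated noise adapt to the augmentation itself.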
**[Question1] Scenario**
Let’s continue the scenario described in [1]: Given that facial recognition tools can be trained by scraping people’s photos from the public web, can people still feel secure uploading their selfies to social media?
Availability attacks (unlearnable examples) provide a tool that allows AI systems to ignore a person’s selfies; however, this is limited to AI systems based on supervised learning algorithms.
Considering contrastive learning algorithms can currently achieve performance comparable to that of supervised learning, once AI systems begin to use contrastive learning, the protection of selfies collapses.
This is why the effectiveness of attacks against contrastive learning in real-world scenarios is important.
Our work provides an efficient method that is effective for both supervised learning and contrastive learning.
[1] Will Douglas. How to stop AI from recognizing your face in selfies. MIT Technology Review. https://www.technologyreview.com/2021/05/05/1024613/stop-ai-recognizing-your-face-selfies-machine-learning-facial-recognition-clearview/
**[Q2] Augmentation strategies** See our reply to [W2]
---
Rebuttal Comment 1.1:
Comment: Thank the authors for your detailed clarifications and additional analysis. I appreciate all of your efforts. I still have one concern regarding the "enhanced augmentations". In Figure 1 (b) of the rebuttal, does it suggest that both annealing and tempering are not critical for the unlearnable CL? Could you please further motivate your choices? To make it more specific, in line 209 of the draft: "In other words, supervised error-minimizing noises with enhanced data augmentations can partially replace the functionality of contrastive error-minimizing noises to deceive contrastive learning." Could you further clarify the enhanced augmentation and motivate it?
---
Reply to Comment 1.1.1:
Title: Further clarification
Comment: Thanks for your response to our rebuttal! We will further clarify our method and its motivation in detail.
**[1] Motivation of augmentation enhancement**
To make clear our motivation, we first revisit the mechanism of error-minimization and then clarify the motivation and the statement in Line 209.
- **Mechanism of error-minimization** In Equations (2,4) of our paper, error-minimization refers to a min-min optimization of an objective (SL or CL) loss with respect to weights and perturbations. It aims to generate perturbed data that can be extremely easily optimized by the cross-entropy loss for SL or by InfoNCE loss for CL.
As a result, this characteristic of poisoned data, which makes the loss function converge easily, leads to a model that only learns the fraudulent synthetic patterns from perturbations and fails to learn the true data distribution during training. For example, in Figure 7 of our paper, we closely examine the training process on poisoned data and find that it converges rapidly.
- **Motivation of our method and the statement in Line 209**
To obtain unlearnability for CL, the existing path is contrastive error-minimization (CP, TUE, TP) based on the CL training process.
In this paper, we aim to provide a solution based on the SL training process which is more efficient.
Recall that Equation (2) differs from Equation (4) in the choice of loss function, i.e., SL loss vs. CL loss. Our **key insight** is that using contrastive augmentation in the SL loss and optimizing it can implicitly reduce the CL loss to some extent, as shown in Figure 4. (More empirical observations and theoretical analysis about this have been discussed in Section 4.1 and Appendix E.)
Thus, for our proposed AUE attack, while it appears to be performing supervised error minimization, it is also carrying out contrastive error minimization.
That is, as stated in **Line 209**, the enhanced data augmentation allows supervised error-minimization to partially serve the role of contrastive error-minimization. We verify this claim in Figure 5(a) of our paper, in which the InfoNCE loss during training on AUE poisoned data is largely reduced, achieving a similar effect of contrastive error-minimization.
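As an illustrative sketch of this min-min structure (a toy linear model with hinge loss on random labels, not the paper's actual generation pipeline), alternating the two minimizations makes the poisoned data almost trivially fittable even though the labels carry no signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 10, 1.0, 0.1
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n) * 2.0 - 1.0   # pure-noise labels in {-1, +1}
w = np.zeros(d)                               # surrogate linear classifier
delta = np.zeros((n, d))                      # per-sample perturbations

def hinge(w, Xp):
    return np.maximum(0.0, 1.0 - y * (Xp @ w)).mean()

for _ in range(200):
    # inner min over perturbations: inside the l_inf ball of radius eps, the
    # per-sample hinge loss of a linear model is minimized in closed form by
    # delta_i = eps * y_i * sign(w)
    delta = eps * y[:, None] * np.sign(w)[None, :]
    # outer min over weights: one subgradient step on the current poisoned data
    margins = y * ((X + delta) @ w)
    active = (margins < 1)[:, None]
    grad_w = (-(y[:, None] * (X + delta)) * active).mean(axis=0)
    w -= lr * grad_w

delta = eps * y[:, None] * np.sign(w)[None, :]   # perturbations for the final w
poisoned_loss = hinge(w, X + delta)
clean_loss = hinge(w, X)
# the poisoned training loss collapses, while the same model still has
# large loss on the clean (random-label) data: a synthetic "shortcut"
```

This mirrors the mechanism described above: the perturbations make the loss converge easily, so the model learns the synthetic pattern rather than the data.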
**[2] Implementation schemes of augmentation in our attacks**
- **Augmentations in SL and CL** CL typically uses more and stronger data augmentations compared to SL. Specifically, SL generally relies on simpler augmentations like cropping and flipping, whereas contrastive learning incorporates more advanced techniques such as color jittering, grayscale, and others.
SL-based attacks typically leverage the mild data augmentation used in supervised learning.
- **Default constant strength scheme**
To enhance the contrastive unlearnability of SL-based attacks, we replace the data augmentation with contrastive augmentation and adjust the intensity of augmentation through a strength parameter, as shown in Appendix C.2, Code Listing 1.
By default, we fix the augmentation strength as a constant value in the generation of AUE and AAP attacks, as discussed in Appendix D.4.
Comprehensive experiments show that our attacks are both effective and efficient against SL and CL simultaneously.
- **Annealing and tempering schemes**
In the previous rebuttal, besides the constant augmentation scheme, we also try annealing and tempering schemes. These two dynamic choices are inspired by [1], which proposes that annealing down augmentation strength is beneficial for adversarial contrastive learning. We want to see if dynamic schemes also benefit our problem.
As shown in Figure 1 of our rebuttal, on CIFAR-10, the improvement of a particularly selected alternative scheme is marginal compared to the default one. Therefore, we believe that the **decisive factor** in our method is the contrastive augmentation itself, rather than the choice between constant and dynamic schemes for the augmentation.
[1] Luo R, Wang Y, Wang Y. Rethinking the effect of data augmentation in adversarial contrastive learning. ICLR 2023.
We hope our reply makes things more clear. Feel free to let us know if you still have concerns. | Summary: This paper aims to address the issue of effectively conducting availability attacks on both supervised learning (SL) and contrastive learning (CL) algorithms. Specifically, the paper highlights that existing methods fail to simultaneously achieve "unlearnability" for both SL and CL, posing risks to data protection. To tackle this challenge, the paper proposes a novel approach that employs contrastive-like data augmentation within the supervised learning framework to execute effective attacks on both SL and CL.
Strengths: 1. The proposed methods achieve better efficiency and Pareto improvement for both the SL and CL tasks.
2. This is the first work to demonstrate that adding contrastive augmentation to an SL-based surrogate can fool CL-based models.
Weaknesses: 1. The proposed AUE and AAP are both brittle under diffusion-based purification, which limits the practical usage of unlearnable examples in the real world.
2. Lack of discussion and comparison with recent work [1] that leverages CLIP latent as guidance for crafting transferable and label-agnostic perturbation, which could potentially achieve efficient perturbation crafting for both SL and CL tasks.
3. The results in Table 3 show that the proposed methods did not consistently outperform other methods under different SL and CL settings, which makes the stability of improvement questionable.
4. This work essentially operates by crafting perturbations that are resilient to augmentations for both supervised learning and contrastive learning, which is a conceptually simple extension of the transformation augmentation technique in REM [2]. One fundamental limitation of such a direction is that an unauthorized trainer might leverage stronger transformations like super-resolution or diffusion-based augmentation [3] in their training pipeline.
Typos: Line 20: Change "particular, Huang et al. [20] reduces" to "particular, Huang et al. [20] reduced."
Ref.
[1]. One for All: A Universal Generator for Concept Unlearnability via Multi-Modal Alignment, ICML'24
[2]. Robust Unlearnable Examples: Protecting Data Against Adversarial Learning, ICLR'22
[3]. DiffAug: Enhance Unsupervised Contrastive Learning with Domain-Knowledge-Free Diffusion-based Data Augmentation, ICML'24
Technical Quality: 3
Clarity: 3
Questions for Authors: Nowadays, one of the most popular ways to build classifiers is to leverage CLIP as a feature encoder with or without additional head to do classification. Is the perturbation crafted from AUE and AAP transferable to this setting?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their thorough and constructive feedback! We aim to clarify and address your concerns through our detailed replies.
**[Weakness1] Comparison with “One for All (14A)”** We believe that "14A" and our work are largely orthogonal in nature; we explain this in detail below.
- The biggest difference between our method and “14A” is that we are studying different types of transferability. Specifically, “14A” studies **cross-dataset transferability**, meaning that the noise generator should produce UEs effective against supervised learning on different datasets. However, our paper investigates **cross-algorithm transferability**, meaning the generated UEs should be effective not only against SL but also against CL on the same dataset.
- “14A” does not claim to work for CL, nor does it present any experimental results related to CL. Meanwhile, we don't claim that our attacks transfer well across datasets, since they are based on dataset-specific optimization instead of a noise generator, and directly applying off-the-shelf perturbations to a different dataset is much more difficult and less realistic.
**[W2] Non-consistent advantage** The reviewer mentioned that our proposed attacks do not consistently outperform baseline methods against different CL algorithms.
We believe this phenomenon is **not a weakness** of our method; rather, it reflects the vulnerability of CL-based methods including CP, TUE, and TP, in terms of cross-algorithm transferability.
- CL-based attacks show varying performance against different CL algorithms (see Table 11 in our paper). For example, since TP is generated using SimCLR, it achieves rather low accuracy against SimCLR evaluation, i.e., 6.7% for CIFAR-100, while our AUE achieves 13.6%. However, when facing a different evaluation algorithm, say, BYOL, TP’s accuracy drastically increases to 27%, which is higher than AUE’s 19.2%.
- In contrast, our SL-based attacks demonstrate relatively stable performance against different CL algorithms and surpass CL-based baseline methods in worst-case unlearnability.
**[W3] REM's augmentation** The reviewer mentioned that our technique is an extension of the technique used by REM. We believe there are some misunderstandings here and we will elaborate below.
- “Expectation over transformation (EOT)” was proposed to make robust adversarial examples in the physical world [1], for example, a 3D-printed "adversarial" turtle that can be classified as a rifle from different views.
- Then REM uses a modified EOT in which the transformation distribution only contains Crop and Horizontal Flip.
Note that these transformations are common in standard supervised learning and not exclusive to adversarial training.
The ablation study empirically shows EOT can improve the performance of REM against adversarial training.
- We want to clarify that our method is **not an EOT variant** at all since it does not involve taking expectations.
Moreover, our SL-based method contains contrastive augmentation that does not appear in SL.
We leverage contrastive augmentation since it is a fundamental component of CL and our goal is to achieve unlearnability for CL.
- In summary, in terms of both motivation and technical details, our method is not an extension of EOT, but a technique specifically designed to address a particular problem.
[1] Athalye A, et al. Synthesizing robust adversarial examples. ICML 2018
**[W5] Diffusion purification**
- We acknowledge that at this stage of development, diffusion-based purification techniques have impacted all methods that aim to protect images through subtle perturbations, not only the method we studied, which uses availability attacks to prevent unauthorized data usage, but also methods that protect the copyrights of artists’ works [2,3,4].
- In Appendix D.9 of our paper, we discussed these techniques. We consider the defensive capability of diffusion purification as a limitation of this work, and exploring how to overcome this limitation is a promising direction for future research, such as incorporating the diffusion process into perturbation generation.
- Since the code in DiffAug's GitHub repository does not include implementations for commonly used datasets such as CIFAR and ImageNet, we needed to spend additional time modifying the code to adapt it to our problem.
Once the results are available, we will update them in a subsequent version.
[2] Shan S, et al. Glaze: Protecting artists from style mimicry by {Text-to-Image} models. USENIX Security 2023
[3] Liang C, et al. Adversarial example does good: Preventing painting imitation from diffusion models via adversarial examples. ICML 2023
[4] Hönig R, et al. Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI. arXiv:2406.12027
**[W6] Typo** Thank you for pointing it out. We will correct it in the next version.
**[Question] CLIP** We conduct linear probing upon CLIP on CIFAR-10 following the official example in the GitHub repository. The CLIP encoder is pre-trained by OpenAI and fixed.
- The following results show that both AUE and AAP can make CLIP-extracted representations of training data deviate from the true representation distribution. Compared to AUE, AAP achieves more unlearnability in such scenarios.
||Clean|AUE|AAP|
|-|-|-|-|
|CIFAR-10|94.99|90.63|51.22|
|CIFAR-100|80.00|72.56|66.82|
- The reason for this could be that CLIP exhibits adversarial vulnerability, making it easy to find pixel-level perturbations that alter feature semantics [5]. Although our attacks were not designed specifically for CLIP, the perturbations they generate might share some commonalities with CLIP’s adversarial examples. Compared to AUE, AAP, as an adversarial example for SL, possibly plays a greater role in confusing the feature semantics extracted by CLIP.
[5] Fort S. Adversarial examples for the OpenAI CLIP in its zero-shot classification regime and their semantic generalization
---
Rebuttal Comment 1.1:
Title: Thank you for your detailed response
Comment: Thank you for your detailed response. Your response addresses most of my concerns. I will raise my score. But I am still looking forward to seeing the perturbation against DiffAug experiments, since, as you mention, it's a fundamental challenge in this field.
---
Reply to Comment 1.1.1:
Title: Discussion about DiffAug
Comment: Thank you for acknowledging our work.
We will address your concern about DiffAug through additional experimental results.
Since DiffAug's public repository only contains code for biology datasets, we sent emails to the authors requesting official implementations for vision tasks. Unfortunately, we have not received a response so far.
Consequently, based on the paper and the existing code, we did our best to reproduce DiffAug on CIFAR-10. We used a ResNet-18 as the encoder backbone and a UNet as the diffusion backbone.
We train DiffAug on clean/poisoned training data and then perform linear probing on the encoder. The following table shows the test accuracy. **Our attacks successfully transfer to DiffAug**. Specifically, compared to the non-attack case, AUE and AAP attacks reduce the test accuracy by 46.44% and 50.48% respectively.
| Clean | AUE | AAP |
|-------|-----|-----|
| 81.88 |35.44|31.40|
In summary, although DiffAug incorporates diffusion-based data augmentation in contrastive learning, our proposed attacks are still effective against it. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful consideration and valuable feedback!
In our responses to each reviewer, we have clarified and addressed the weaknesses and issues raised, including many additional supplementary experiments.
Due to the word limit of the rebuttal and the design of the review system, we were unable to fully present the information from the tables and figures in our responses.
Here, we have consolidated the charts and tables from the supplementary experiments into an additional document and uploaded it.
- **Figure 1** Based on Reviewer 5WZD’s suggestion, we conducted more ablation studies on augmentation strategies. Figure 1 shows the experimental results of using a dynamic augmentation scheme for AUE.
- **Figure 2** Based on Reviewer 2’s suggestion, we conducted experiments with MAE. Figure 2 shows the training process of MAE fine-tuning under attacks.
- **Figure 3** The results of applying perturbations after augmentation for AUE, which is also an ablation study on augmentation strategies.
- **Table 1** To address Reviewer 3’s questions, we tested the effectiveness of our attacks against CLIP and showed results in Table 1.
- **Table 2** The performance of our attacks against MAE.
Pdf: /pdf/35c77fa68ca09cda05fabdb9288937b69b0e8efc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Euclidean distance compression via deep random features | Accept (poster) | Summary: The paper focuses on constructing sketches of point sets via (compositions of) random maps $\varphi_l$ into the discrete cube $N^{-\frac{1}{2}}\{-1,1\}^N$, and describes how to get an estimate of the squared Euclidean distance. The paper explains how the maps $\varphi_l$ are constructed, motivates the choice based on properties of the functions $f, g$, and has detailed proofs bounding the error of the sketch based on $l$. The paper discusses the limitations of $\varphi_l$, i.e., there is an additive $\epsilon\|x - y\|^{2-2^{1-l}}$ error; hence for $\|x - y\| > 1$, $\varphi_l$ with $l > 1$ may not be optimal. There are experiments with simulated data to show how $\epsilon$ varies and how $l = 1, 2$ perform for nearest neighbor search, as well as nearest neighbor search with the RCV1 dataset. Based on the experiments, the paper summarizes conditions for when $l = 1$ and $l = 2$ should be used.
Strengths: - The sketching algorithm outlined here stores data as bits (on the discrete cube $\{-1,1\}^N$) (scaling factor can be applied after storage) ; this saves on space (compared to a sketching algorithm that stores data as doubles or floats)
- Treating $\varphi^D$ as the random map, the idea of applying $\varphi^D$ repeatedly, and finding an appropriate "inverse" to recover the Euclidean distance is a nifty idea.
- The paper is extremely clear, motivating random projection and sketching, as well as giving detailed proofs and explaining why each step was taken for w.h.p. bounds on the error of the sketch.
To summarize, I feel the strengths of this paper are putting together several ideas: linking the derivative of the function $g \equiv \sin(\pi/2 t)$ to bounding the additive error of the approximation, and carefully explaining what happens if $\varphi^D$ is applied multiple times.
Weaknesses: - The one layer map $\varphi_l$ (lines 111-113), if I am not mistaken, comes from the original sign random projections in [6] (Section 3, Random Hyperplane Based Hash Functions for Vectors). While this was originally used to estimate angles, Li et al (Section 4, Sign Random Projections) looked at estimating $a$ (which is $\langle x,y \rangle$ in the notation of this paper) using $\arccos(..)$, given that the margins are known / the data is normalized (which is equivalent to points being on $S^{d-1}$ in the notation of this paper). By the polarization identity, $\|x-y\|$ can be recovered directly from $\langle x,y\rangle$. The experiments run generally show that the one layer map is better than the two layer map, except in cases where $\|x - y\| \leq 0.06$ (line 317), which suggests that the two layer map is only useful in very niche cases. It would be good if there were some reference to sign random projections when referring to the one layer map. Unfortunately, this means any novelty would be for the $l$ layer map, $l \geq 2$.
- The plots in Figure 1 show that $\epsilon$ is generally larger than $\|x - y\|$. Moreover (line 317), the two layer map performs better when the Euclidean distance $\|x-y\| \leq 0.06$, which corresponds to $\theta_{x,y} < 0.06$. I am not exactly sure how realistic a two layer map would be. While it may accurately recover the true nearest neighbor, there may also be false positives, i.e. a point with a farther Euclidean distance but have an estimate that is "closer".
- I found it difficult to replicate the experiment in Figure 3, since a quick Google search found only the text dataset (not pre-processed into vectors). Moreover, I found the "first project the data with a Gaussian random matrix to $\mathbb R^{5000}$" a bit puzzling (e.g. are the true nearest neighbors the original neighbors, or the true nearest neighbors after projecting with a Gaussian random matrix?)
- I note that computing the second layer map took up substantially more time than just the first layer map (even with $D_1 = 6000, D_2 = 1000$), so I am not sure if the increase in computing time is worth the reduction in error.
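For concreteness, here is a minimal sketch (not the paper's code; all dimensions are illustrative) of the one-layer sign random projection estimator discussed above: the fraction of agreeing signs estimates the angle, $\arccos$/$\cos$ recovers $\langle x,y\rangle$, and the polarization identity recovers $\|x-y\|$ for unit vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 128, 20000

# Two unit vectors (points on S^{d-1}).
x = rng.standard_normal(d); x /= np.linalg.norm(x)
y = x + 0.05 * rng.standard_normal(d); y /= np.linalg.norm(y)

# One-layer map: keep only the signs of a Gaussian random projection.
G = rng.standard_normal((D, d))
sx, sy = np.sign(G @ x), np.sign(G @ y)

# Each coordinate agrees with probability 1 - theta/pi, so the agreement
# rate estimates the angle, and cos recovers the inner product.
theta_hat = np.pi * (1.0 - np.mean(sx == sy))
ip_hat = np.cos(theta_hat)
dist_hat = np.sqrt(max(0.0, 2.0 - 2.0 * ip_hat))  # polarization identity

print(abs(ip_hat - x @ y), abs(dist_hat - np.linalg.norm(x - y)))
```

Both absolute errors are small for this output dimension, which is the sense in which the margins-known / normalized-data setting lets $\langle x,y\rangle$ (and hence $\|x-y\|$) be estimated directly.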
References:
Li et al: Improving Random Projections Using Marginal Information, COLT 2006
Technical Quality: 3
Clarity: 4
Questions for Authors: - Li et al (Section 4) states "In fact, when $\theta$ is close to $0$ or $\pi$, due to the high nonlinearity, the asymptotic variance formula is not reliable." Can the results of Theorem 6 (in this paper) show that if ``sign random projections" were applied again, an angle $\theta$ close to $0$ would have a lower variance (since a small $\theta$ implies a small Euclidean distance)?
- I am uncomfortable with line 472-473, "Since $g_l(f_l(t)) = t$ ..., $g_l(\langle\varphi_l(x), \varphi_l(y)\rangle)$ should be a good approximation of $\langle x, y\rangle$". I agree that $\mathbb{E}[\langle \varphi^D(x), \varphi^D(y)\rangle] = f(\langle x,y\rangle)$, and also that $g(\mathbb{E}[\langle \varphi^D(x), \varphi^D(y)\rangle]) = g(f(\langle x,y\rangle)) = \langle x, y \rangle$. But as the goal is to approximate $\langle x,y\rangle$ (and from it the Euclidean distance), $\mathbb{E}[g(\langle \varphi^D(x), \varphi^D(y)\rangle)] \neq g(\mathbb{E}[\langle \varphi^D(x), \varphi^D(y)\rangle])$. Hence I am not sure if the bounds are as accurate (here, I am thinking of Taylor expansions of $\mathbb{E}[g(\langle \varphi^D(x), \varphi^D(y)\rangle)]$, and bounding remainder terms). I might be wrong, and am happy to be corrected.
I have tried $\|x-y\| = 0.03$ with $l = 2$ in a similar experiment in Section 4.1, lines 304-325, and I do see the two layer map outperforming the one layer map, but the three layer map having poor performance compared to the two layer map. However, with a lower $\|x-y\| = 0.0003$, I see that the three layer map outperforms both the two layer map and the one layer map (using the choice of 50000->6000->1000), so the general idea is still right.
- Are the true nearest neighbors the original neighbors, or the true nearest neighbors after projecting with a Gaussian random matrix for the experiments in Fig 3?
- The left plot in Fig1 might be more informative when $\epsilon$s are plotted within the unit circle in order to see comparisons for the one layer and two layer map for some fixed values of output dimension. E.g., plot the corresponding $\epsilon$s for the one layer map / two layer map on the "line" parameterized by $(a, \sqrt{1-a^2})$ as $a$ varies, so a reader can visually see the region where the one layer map performs better, and the two layer map performs better. Admittedly, this is only useful in 2D when it is easy to "convert visually" from angles to Euclidean distances.
- A conclusion (or discussion section) that summarizes the ideas in Section 1 would be good as well - although I think this can be done by placing some parts of Section 1 at the end of the paper.
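The layered experiment I describe in my comments above can be reproduced in miniature. A hedged sketch (not the authors' code), assuming each layer is a sign projection rescaled to the unit sphere and inverted with $g(s) = \sin(\pi s/2)$ applied once per layer; the dimensions are scaled down from the 50000->6000->1000 choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(X, D):
    # One sign-projection layer; each output row has unit norm.
    G = rng.standard_normal((X.shape[1], D))
    return np.sign(X @ G) / np.sqrt(D)

def g(s):
    # Inverse of f(t) = 1 - (2/pi)*arccos(t), since E<phi(x),phi(y)> = f(<x,y>).
    return np.sin(np.pi * s / 2.0)

d = 200
x = rng.standard_normal(d); x /= np.linalg.norm(x)
y = x + 0.02 * rng.standard_normal(d); y /= np.linalg.norm(y)
X = np.stack([x, y])

# Two-layer map (d -> 6000 -> 1000), undone by applying g twice.
Z = phi(phi(X, 6000), 1000)
ip_hat = g(g(Z[0] @ Z[1]))
print(abs(ip_hat - x @ y))
```

The same pattern extends to three layers by composing `phi` once more and applying `g` a third time.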
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors address the main limitation when $\varphi_l, l \geq 2$ performs better.
I like the ideas in this paper, and future work on sketching algorithm with quantization steps could potentially build on this, but the experimental results and discussion could be more convincing.
My score is motivated by the unit circle mentioned in the questions (comparing when a one layer map is better than two layer map) which shows a small "slice" where the two layer map is better. Broadly speaking, I cannot think of any application where it is desirable to have good estimates for points extremely close to each other, yet ensure false positives do not occur (i.e. that "slice" where the Euclidean distance is $< 0.06$). I am happy to raise my score if there are potential applications, experiments or discussion (niche but realistic cases are okay).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness 1: We agree and we only claim novelty in the case $l \geq 2$; versions of the 1-layer map have been discussed in several papers. We will clarify the relationship with [6] (Charikar) and [Li et al, COLT 2006] when the maps are introduced. See overall author rebuttal for a discussion of the "niche" aspect.
Weakness 2: We agree that there can be incorrect nearest neighbors after approximation. This issue seems to be inherent to any multiplicative approximation applied to nearest neighbors. Namely, given a query point and several points at nearly the same distance of the query point, any point could be returned as an approximate nearest neighbor. At the same time, because our theoretical guarantees are for multiplicative (and not just additive) error, some types of "false positives" are not possible. For example, if x is the query point and y is its nearest neighbor at distance .05, and there is a third point z at distance 1/2 from x, it is not possible for z to be misclassified as the nearest neighbor as long as the sketch succeeds.
Weakness 3: The RCV1 dataset is available (in normalized vector form) in scikit-learn via `from sklearn.datasets import fetch_rcv1`. We also sent our code to the AC, so that should be available now. Concerning the projection step, it is just for computational savings. The "true nearest neighbors" are the nearest neighbors after projection. To make the experiment simpler and more convincing, we reran the experiment without projecting and obtained essentially the same result (see the attached pdf). We will update the experiment in the paper with the version without projection.
Question 1: Our analysis shows that the additive error is small with high probability for multilayer maps and this suggests that the variance is smaller, but we do not have a formal argument for the variance.
Question 2: The "good approximation" is in the sense of a small additive error with high probability. We do not need the approximation to be "unbiased." Namely, we are using $\langle \varphi_\ell(x),\varphi_\ell(y)\rangle$ as an estimator of $f_\ell(\langle x,y \rangle )$. But we are not claiming that $E \langle \varphi_\ell(x),\varphi_\ell(y)\rangle $ is equal to $f_\ell(\langle x,y \rangle )$. However, one of our main results (Theorem 5) shows that it is a good estimator in an additive sense as long as dimensions are large. This "additive approximation" interpretation is clarified in the next sentence (473-474).
Question 3: The "true nearest neighbors" are the nearest neighbors after projection.
Question 4: From Figure 1, we see that the distance threshold at which the 2-layer map becomes better than the 1-layer map is approximately $\|x-y\| = .06$. This corresponds to $\langle x,y \rangle \approx .998 $ and $\Theta_{x,y}\approx .06$.
Question 5: This is a good suggestion, we will add a conclusion in the manner suggested.
Limitations:
Please see the overall author rebuttal for a discussion of the significance of the theoretical and experimental contributions. Building on what we wrote in "Weakness 2" above, some kinds of false positives are impossible because our approximation guarantee is multiplicative and other kinds of false positives are intrinsic to any comparable kind of approximation.
We agree that there is only a small "slice" where 2-layers is better, i.e. when the minimum distance is <.06. However, because we estimate distances up to a multiplicative $(1\pm\epsilon)$ error, the risk of false positives is not any larger in that scenario than it is for the datasets where minimum distances are much larger. So the question of the applicability of the 2-layer map is essentially the question of the existence of datasets where minimum distances often tend to be smaller than .06. This is rare, but it does happen. Our example is the RCV1 dataset, which does have pairs of points at distance <.06. Figure 3 (and the updated experiment in the attached pdf) demonstrates that 2-layers does perform better for the 1-nearest neighbor.
---
Rebuttal 2:
Comment: Thank you for the detailed rebuttal! I am currently waiting for the AC to send out the code, as well as the global rebuttal (which I assume also has the attached pdf).
---
Rebuttal 3:
Comment: The authors have addressed my concerns, and I am happy to increase my score. I would suggest that the authors further emphasize the significance of their work (as mentioned in the global rebuttal), and perhaps add a discussion of false positives with respect to multiplicative error to reach out to the more applied folks.
---
Rebuttal 4:
Comment: Thanks, we appreciate the suggestions and will plan to add some discussion of multiplicative error in the context of applications to the introduction. | Summary: This paper studies the bit complexity of storing the Euclidean distance between n points X up to (1 +- eps) error. The simplest setting assumes all points have ||x||_2 = 1, and all results depend on the "spread" of the point set m = min_{x_1,x_2 in X} ||x_1 - x_2||.
A good comparison for their results is via a JL random projection embedding; the number of bits required for this is:
- O(n * (1/eps^2) * log n * log(1/(m eps)))
The number of bits required for their approach is
- O(n * (1/eps^2) log n * log(1/m)^{2.3})
Strengths: The main advantage of this approach is that it directly records the embedding as bit vectors (well, as [-1, +1] vectors, plus an implicit scaling term depending on the dimension of the data).
The paper appears technically interesting. It introduces an idea of composing maps into bits instead of just a single map, which can show some technical improvement in theory and practice for small distances.
Weaknesses: This might be of some independent interest, but as of now the improvement is quite minor.
This paper reports a very small improvement in very limited cases.
In particular, the improvement occurs when the desired error eps is much smaller than the spread parameter m; but typically for (1 +- eps) error, eps is a constant, so this seems of limited interest.
As a result, I do not think it is worth publishing at NeurIPS.
Technical Quality: 4
Clarity: 3
Questions for Authors: none
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weaknesses: We believe our theoretical contributions are substantial because of the new algorithmic ideas and because our upper bound nearly matches an existing lower bound. See the overall author rebuttal for more details.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I agree these results and techniques may be of purely theoretical interest, but then I do not think NeurIPS is the right venue -- at least I am not convinced.
This is only an improvement when eps (typically a constant) is **exponentially** smaller than the spread of the point set m. I am just not convinced this improvement is of interest to any settings relevant to the NeurIPS community.
---
Reply to Comment 1.1.1:
Comment: The improvement is not only an improvement when $\epsilon$ is exponentially smaller than the spread. The actual range of improvement is much better than exponential in theory, and even in practice $\epsilon$ does not need to be much smaller than $m$ to see an improvement. The relevant terms to be compared are $\log(1/(m \epsilon))$ and $(\log (1/m))^{2.3}$. In theory, to have $\log(1/(m \epsilon)) \geq (\log (1/m))^{2.3}$ it is sufficient to have $1/\epsilon \geq (1/m)^{(\log(1/m))^{1.3}}$, so the threshold for improvement is no worse than quasi-polynomial (much better than exponential). This can also be verified numerically for practical values of $m$ and $\epsilon$: the largest $\epsilon$ given $m$ where $\log(1/(m \epsilon)) \geq (\log (1/m))^{2.3}$ is $\epsilon=.48$ for $m=1/4$ (so $\epsilon$ can even be larger than $m$) and $\epsilon=.011$ for $m=1/10$.
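For concreteness, the thresholds quoted above can be checked with a few lines of arithmetic (natural logarithms; this is an illustrative check, not the paper's code):

```python
import math

def eps_max(m):
    # Largest eps with log(1/(m*eps)) >= (log(1/m))**2.3, i.e. the threshold
    # below which the new bound improves on the discretized J-L bound.
    return math.exp(-math.log(1.0 / m) ** 2.3) / m

print(round(eps_max(1 / 4), 2))   # 0.48
print(round(eps_max(1 / 10), 3))  # 0.011
```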
Another issue with comparing our algorithmic approach with discretized J-L is that the bound we quote for J-L is intended as an information-theoretical bound only. Our algorithm is straightforward to implement while it is not clear that there is an efficient algorithm that matches the discretized J-L bound. This is because it is based on an optimal (non-algorithmic) epsilon-net of the unit ball and it is not clear what the compression/encoding algorithm is. For each projected point, the natural compression algorithm has to find the nearest point in the (exponentially large) $\epsilon$-net. | Summary: The authors investigate the bit complexity of distance preserving embeddings of points on the unit sphere and in the unit ball.
Their main finding is that iterative application of Charikar's hyperplane SimHash could use slightly fewer bits of storage than snapping a random projection to an epsilon net for certain parameters.
Existing prior work could compress a set of points to even fewer bits while preserving distances, but all these methods operate on the entire set. The authors' method compresses data points (vectors) individually, which is an advantage.
Empirical evaluation with synthetic and small scale real world data support the theoretical claims.
Strengths: The authors study a broadly applicable problem.
The proposed method is simple to implement and comes with theoretical guarantees.
The experiments support the theory.
Weaknesses: The reduction in the number of bits needed only impacts lower order factors, as explained on line 77.
Could you please discuss the many quantization methods cited on line 173 and compare yours with them both analytically and experimentally?
Figures 1 and 2 lack baselines that compress to bits, could you please add a few, including quantizing (epsilon-netting) a random projection?
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you compare the theoretical predictions of Theorem 6 with the empirical findings of Figure 1 that for distances < 0.06 two layers have lower error than one layer?
Line 340: "six time the output dimensions." Could you elaborate how 6 was chosen?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are sufficiently discussed. (Mostly) theoretical work, no negative societal implications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Weakness 2: Of the quantization methods discussed on line 173, the most relevant one is known as "sign random projections" [6] and some of those other papers generalize sign random projections in various ways. We will add a remark saying that our 1-layer map $\varphi_1$ is the same as the original sign random projection and that the main novelty of our work is composing multiple sign random projections.
Concerning the experiments, paper [6] is essentially the same as our 1-layer map, with the same guarantees. Papers [4],[18],[21],[31] are about compressive sensing and [24] is about quantizing random Fourier features so there is no direct comparison to be made. Paper [28] provides an additive error guarantee, ours is multiplicative. Paper [10] and [23] have no theoretical estimates for the number of bits required as in our Theorem 2, instead they take the approach of bounding the variance of the estimator.
Weakness 3: As our main contribution is theoretical, we did compare our technique to epsilon-netting a random projection analytically but not experimentally. Part of the difficulty is that for such an experiment it is not clear how to choose the tradeoff between the projection dimension and the size of the epsilon net (a fair experiment would have to know the optimal choice of the size of the epsilon net relative to the projection dimension).
Weakness 4: We sent the code to the AC per the instructions. We will publish the code and add a link to the paper.
Question 1: According to Theorem 6, the $\ell$-layer map approximates distances up to an additive $\pm \epsilon \|x-y\|^{2-2^{-\ell+1}}$ error. Say that we use the 1-layer and 2-layer map with the same output dimension $N$. Looking at the definition of $N$ in Theorem 6 and solving that equation for $\epsilon$ we get that the 1-layer map approximates squared distances up to an additive $\frac{\sqrt{48 \log n}(\pi/\sqrt{2})\|x-y\|}{\sqrt{N}}$ error and that the 2-layer map approximates squared distances up to an additive $\frac{\sqrt{48 \log n}(\pi/\sqrt{2})^2\|x-y\|^{3/2}}{\sqrt{N}}$ error. So from this we conclude that 2-layers is better if $\|x-y\|$ is sufficiently small. However, in the context of the experiment in figure 1, Theorem 6 is not able to accurately predict exactly how small $\|x-y\|$ needs to be to make 2-layers better. The reason is that Theorem 6 assumes that $\epsilon<1- \langle x, y\rangle = \frac{\|x-y\|^2}{2} = \frac{1}{800}$ when $\|x-y\|=.05$. Theorem 6 then says that the output dimension $N$ is bigger than $\epsilon^{-2}$, so bigger than 640,000, but in the experiment in Figure 1, $N$ is only 1000. Overall, this shows that Theorem 6 is not best possible and future work could attempt to determine what one can prove without the assumption that $\epsilon<1- \langle x, y\rangle$.
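For concreteness, comparing the two error expressions above gives a crossover at $\|x-y\| = 2/\pi^2 \approx 0.203$, which can be checked numerically (illustrative arithmetic only, not a new bound):

```python
import math

def err(layers, dist, N, n):
    # Additive error bound quoted above:
    # sqrt(48 log n) * (pi/sqrt(2))**layers * dist**(2 - 2**(1 - layers)) / sqrt(N)
    return (math.sqrt(48 * math.log(n))
            * (math.pi / math.sqrt(2)) ** layers
            * dist ** (2 - 2 ** (1 - layers))
            / math.sqrt(N))

# The 2-layer bound beats the 1-layer bound once (pi/sqrt(2)) * sqrt(dist) < 1.
crossover = 2 / math.pi ** 2
print(round(crossover, 3))  # 0.203
print(err(2, 0.05, 1000, 100) < err(1, 0.05, 1000, 100))  # True
```

This crossover (~0.2) is looser than the ~0.06 threshold observed empirically in Figure 1, consistent with Theorem 6 not being best possible.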
Question 2: We chose 6 experimentally. In particular, using 6 instead of 2 made the error significantly smaller. But, for example, using 8 or 10 instead of 6 offered no significant improvement.
---
Rebuttal Comment 1.1:
Comment: Thanks for the explanation and clarifications. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive and helpful feedback. We have the following comments about our contributions and the experiments for all the reviewers:
The main contribution of the paper is theoretical, including algorithmic ideas and their analysis. We believe we have strong contributions and novelty there. Our contributions are about a very basic algorithmic problem: the approximation of distances up to a multiplicative $1\pm\epsilon$ error. While compared to existing upper bounds (algorithms) our improvement appears modest, it is very strong if one takes into account the known lower bounds. Namely, our algorithm's upper bound is very close to being optimal because it matches the lower bound up to the power of the $\log(1/m)$ factor. As we explain in section 1.4, it is known that for the one-way communication version of the sketching/compression problem,
$$
\Omega(\epsilon^{-2}n \log(n/\delta) \log(1/m))
$$
bits are necessary if the algorithm is to be successful with probability $1-\delta$ and $m$ is the minimum distance. Our technique uses
$$
\Theta(\epsilon^{-2}n \log(n) \log(1/m)^{2.3})
$$
bits. When $m$ is asymptotically smaller than $\epsilon$, this is closer to the lower bound than the number of bits one needs if using regular Johnson Lindenstrauss random projection which requires
$$
\Theta(\epsilon^{-2}n \log(n) \log(1/m\epsilon)).
$$
Please see the discussion in Section 1.4 for additional details.
The intent of the experiments is not to show that the proposed algorithm is an improvement in practice; it is to show that the ideas are actually implementable and have a complexity that is within the realm of other methods (i.e. it is not orders of magnitude worse). The experiments also validate the theoretical idea of "composing maps" (depth) in the following sense: They show that the 2-layer map is better than the 1-layer map for reasonable values of the parameters. To make this point stronger, we also show a real world dataset (RCV1) where the 2-layer map is better than the 1-layer map. We redid the experiment with the RCV1 dataset (i.e., Figure 3 in the paper) without first projecting the data to $\mathbb{R}^{5000}$, please see the plots in the attached PDF. The result is essentially the same as the original experiment, but is somewhat more convincing because now we are recovering the true nearest neighbors of query points in RCV1 rather than the nearest neighbors according to the projected data.
Pdf: /pdf/36eeea497f9d2ffb7032dd89a5ecff9c47ef56a7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Artemis: Towards Referential Understanding in Complex Videos | Accept (poster) | Summary: This paper introduces Artemis, a video-language model for video-based referential understanding. It can describe a target referred by a bounding box in a video. A referential video dataset named VideoRef45K is collected to train the model. Artemis is evaluated on HC-STVG benchmark and outperforms baselines adapted from image-based referring models. Besides referential understanding, Artemis can also perform general video question answering, and serve as a component in multi-round and long-form video understanding.
Strengths: 1. A video-based referential understanding dataset, VideoRef45K, is established. It facilitates the development of the area, providing referential pretraining data.
2. The design of target-specific feature branch in the model architecture is well-motivated.
3. Although no video-based referring model exists, the paper adapts image-based referring models to video as baselines. The adaption method is quite reasonable.
4. Artemis outperforms the adapted image-based baselines significantly on HC-STVG.
5. Artemis can still perform general video question answering and achieve better performance after training on the video referring task. This demonstrates that video referring can boost the reasoning capability of video-language models.
6. Combined with existing video-language models, Artemis can perform multi-round video understanding with grounding and long-form video understanding.
Weaknesses: 1. The RoI selection step of Artemis clusters the object bounding boxes from different frames. However, the clustering algorithm only considers the bounding box coordinates but does not take the visual content in the bounding boxes into account. In some cases, the bounding box of an object may remain unchanged for a long time but the object state keeps changing, e.g., a person standing at a certain location performs a series of actions. Clustering these frames together and compressing them into one would lose valuable information.
2. The proposed Artemis is built based on video-language models Video-LLaVA and Video-ChatGPT. However, these video-language models are not used as baselines in the experiments. Although they are not originally developed for video referring, there is a simple approach to adapt them for the task. As suggested by [1], directly drawing a circle or a rectangle on an image can help VLMs focus on the indicated object. Therefore, one can adapt the video-language models for video referring by drawing the object bounding boxes on the video frames and ask the model "What is the target indicated by the red rectangle doing?". Artemis should be compared with this simple baseline to demonstrate the effectiveness of the RoI feature branch in its model architecture.
[1] Shtedritski et al. What does CLIP know about a red circle? Visual prompt engineering for VLMs. ICCV 2023.
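A minimal sketch of the suggested baseline (pure NumPy; the box coordinates, color, and line width are illustrative, and a tracker would be needed to supply per-frame boxes):

```python
import numpy as np

def draw_box(frame, box, color=(255, 0, 0), width=4):
    """Draw a rectangle outline on an HxWx3 uint8 frame; box = (x1, y1, x2, y2)."""
    out = frame.copy()
    x1, y1, x2, y2 = box
    out[y1:y1 + width, x1:x2 + 1] = color          # top edge
    out[y2 - width + 1:y2 + 1, x1:x2 + 1] = color  # bottom edge
    out[y1:y2 + 1, x1:x1 + width] = color          # left edge
    out[y1:y2 + 1, x2 - width + 1:x2 + 1] = color  # right edge
    return out

frame = np.full((224, 224, 3), 255, dtype=np.uint8)  # blank stand-in frame
marked = draw_box(frame, (50, 60, 150, 180))
print(marked[60, 100])  # a top-edge pixel is now red
```

Applying this to every sampled frame and then prompting "What is the target indicated by the red rectangle doing?" would implement the visual-prompt baseline of [1] for video.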
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. Does the target-specific features in \<region\> tokens include the positional information of the bounding boxes?
Confidence: 5
Soundness: 2
Presentation: 4
Contribution: 4
Limitations: Limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contributions of our work. We deeply appreciate your constructive comments and have provided point-to-point responses below. We hope our responses address all your concerns, and further comments are welcomed.
**Q1:** *The RoI selection step of Artemis clusters the object bounding boxes from different frames. However, the clustering algorithm only considers the bounding box coordinates but does not take the visual content in the bounding boxes into account. In some cases, the bounding box of an object may remain unchanged for a long time but the object state keeps changing, e.g., a person standing at a certain location performs a series of actions. Clustering these frames together can compressing them into one would lose valuable information.*
**A1**: Thanks for the question. We **did** take visual contents within the bounding box into consideration. Specifically, we computed a token for each RoI which contained the visual features extracted from the bounding box (from the same visual encoder, *i.e.* a pre-trained CLIP ViT-L/14 model) and used the token for clustering. We will add the implementation details to the final paper.
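As a hypothetical illustration of this step (not our actual implementation; a simple k-means over per-RoI feature tokens, with names and shapes illustrative):

```python
import numpy as np

def select_rois(roi_feats, k=5, iters=20, seed=0):
    """Cluster per-RoI feature tokens and keep one representative RoI per cluster.
    roi_feats: (T, C) array, one feature vector per tracked RoI; returns indices."""
    rng = np.random.default_rng(seed)
    centers = roi_feats[rng.choice(len(roi_feats), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each RoI to the nearest center, then recompute centers.
        d = ((roi_feats[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = roi_feats[labels == j].mean(0)
    # Representative = the RoI closest to each cluster center.
    d = ((roi_feats[:, None, :] - centers[None]) ** 2).sum(-1)
    return sorted({int(d[:, j].argmin()) for j in range(k)})

feats = np.random.default_rng(1).standard_normal((32, 16))  # e.g. CLIP RoI tokens
reps = select_rois(feats, k=4)
print(reps)
```

Because the clustering operates on visual feature tokens rather than box coordinates, frames where the box is static but the appearance changes fall into different clusters.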
**Q2:** *The proposed Artemis is built based on video-language models Video-LLaVA and Video-ChatGPT. However, these video-language models are not used as baselines in the experiments. Although they are not originally developed for video referring, there is a simple approach to adapt them for the task. As suggested by [1], directly drawing a circle or a rectangle on an image can help VLMs focus on the indicated object. Therefore, one can adapt the video-language models for video referring by drawing the object bounding boxes on the video frames and ask the model "What is the target indicated by the red rectangle doing?". Artemis should be compared with this simple baseline to demonstrate the effectiveness of the RoI feature branch in its model architecture.*
> [1] Shtedritski et al. What does CLIP know about a red circle? Visual prompt engineering for VLMs. ICCV 2023.
**A2**: Thanks for the suggestion! During the rebuttal, we evaluated Video-LLaVA and Video-ChatGPT for video referring using the method you proposed. Specifically, we followed [1] to draw a red rectangle to mark the referred object in each key frame of the video -- please note that tracking is required to mark the object in most frames. Then, we fed the rendered video to the models and asked the question "What is the target indicated by the red rectangle doing?" Results are summarized in the following table. One can see that, even with the help of an offline tracking algorithm, both Video-LLaVA and Video-ChatGPT report significantly lower scores compared to Artemis. This validates the effectiveness of Artemis' design. We will add these contents to the final paper.
| Method | BLEU@4 | METEOR | ROUGE_L | CIDEr | SPICE |
| :---: | :---: |:---: |:---: |:---: |:---: |
| Video-ChatGPT | 1.3 | 10.1 | 20.2 | 5.5 | 11.7 |
| Video-LLaVA | 1.7 | 9.8 | 20.8 | 2.6 | 9.1 |
| Artemis (Ours) | 15.5 | 18.0 | 40.8 | 53.2 | 25.4 |
**Q3:** *Does the target-specific features in <region> tokens include the positional information of the bounding boxes?*
**A3**: Yes. We used positional encoding to incorporate the coordinates of the bounding boxes. Therefore, the <region> token contains the positional information.
---
Rebuttal Comment 1.1:
Comment: I read all the reviews and the authors' responses to them. The responses to my comments are satisfactory and I highly appreciate the additional experiments. Therefore, I raised my rating to 7 (Accept).
---
Rebuttal 2:
Title: Thanks
Comment: We are delighted that our response addressed your question. We appreciate your support for our work. | Summary: This paper introduces Artemis as a robust solution for the video-based referential understanding task. This task involves analyzing complex videos, each spanning 20–30 seconds, where the target performs multiple actions. Given a video, the Multimodal Large Language Model (MLLM) attempts to answer questions such as "What is the target <region> doing in this video?" with <region> referring to a bounding box in any video frame. Artemis follows the general design principles of modern MLLMs, such as visual instruction tuning. To extract target-specific video features, Artemis employs a straightforward yet effective approach involving (i) tracking the target over time and (ii) selecting informative features from a comprehensive list of regions of interest (RoIs). The training of Artemis consists of three stages, with the first two stages being similar to LLaVA. For the final stage, this paper introduces the VideoRef45K benchmark, comprising 45,000 video question-answer pairs, with box-level prompts and answers for complex videos. Experiments demonstrate the promising performance of Artemis across various quantitative metrics, including BERT score, BLEU, and more.
Strengths: This paper is well-written and easy to follow. The authors present a straightforward solution to address video-based referential understanding, a relatively unexplored research area so far as shown by the authors. The method adopted is simple yet aligns well with the paper's motivation. Additionally, the experimental section is robust and well-executed.
Weaknesses: 1. As far as I know, Multi-Object Tracking (MOT) is far from satisfactory at accurately tracking target regions, particularly in challenging scenarios such as motion blur and occlusion. Although this paper mentions these issues in its limitations section, it does not discuss them in detail. My concern is that the method performs well on the presented benchmarks because the scenarios to be tracked are relatively simple. It would be beneficial if the authors could provide the performance of HQTrack on these benchmarks or offer examples showing the method's efficacy in complex scenarios (e.g., multi-person tracking).
2. This paper does not delve deeply into learning the temporal relationships among tracked Regions of Interest (RoIs). Therefore, the temporal encoding knowledge of these RoIs mainly derives from HQTrack. Since the improvement of this model in video-based referential understanding primarily stems from its better comprehension of temporal dynamics, it raises the concern that the benefit may come from HQTrack rather than the model itself.
3. Comparisons with other models (e.g., Shikra, Merlin) seem somewhat unfair, as these models do not utilize the dynamic knowledge from HQTrack. A fairer comparison setting would enhance the validity of the results.
I am willing to upgrade my rating, if the authors can address my above concerns.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am a bit surprised that MLLMs for video referring and grounding have not been explored yet. As I am not very familiar with referential understanding, I look forward to more feedback from other reviewers. If this paper does not overlook any important references, I am willing to upgrade my rating.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contributions of our work. We deeply appreciate your constructive comments and have provided point-to-point responses below. We hope our responses address all your concerns, and further comments are welcomed.
**Q1:** *As far as I know, Multi-Object Tracking (MOT) is far from satisfying in accurately tracking target regions, particularly in challenging scenarios such as motion blur and occlusion. Although this paper mentions these issues in its limitations section, it does not discuss them in detail. My concern is that the method performs well on the presented benchmarks because the scenarios to be tracked are relatively simple. It would be beneficial if the authors could provide the performance of HQTrack on these benchmarks or offer examples showing the method's efficacy in complex scenarios (e.g., multi-person tracking).*
**A1:** Good suggestion! Indeed, none of the existing MOT algorithms are close to satisfactory. The original tracking results in HC-STVG were produced by SiamRPN, an early video tracking model with lower accuracy. Although we upgraded the tracking algorithm to HQTrack, it may still fail to track all the objects, especially in complex scenarios.
During the rebuttal, we delved into the test set of HC-STVG and found several examples of multi-person tracking in complex scenarios. We show the tracking and video-based referring results in Figure 19 in the `attachment`. As shown, HQTrack sometimes fails to track the object throughout the video clip -- due to the unavailability of ground-truth labels, we cannot quantitatively compute its accuracy. Regarding the referring results, Artemis produces correct descriptions as long as the tracked boxes are accurate. Interestingly, even when the object is missing in some frames, Artemis can (sometimes) produce correct descriptions based on the visual information from other frames.
We will add these examples and analysis to the final paper.
**Q2:** *This paper does not delve deeply into learning the temporal relationships among tracked Regions of Interest (RoIs). Therefore, the temporal encoding knowledge of these RoIs mainly derives from HQTrack. Since the improvement of this model in video-based referential understanding primarily stems from its better comprehension of temporal dynamics, it raises the concern that the benefit may come from HQTrack rather than the model itself.*
**A2:** Thanks for the question. We totally agree that incorporating richer temporal information and/or knowledge is beneficial for video understanding.
(1) Indeed, our work did not introduce an extra module to formulate the temporal information of the tracked RoI, *e.g.* how it changes throughout the video clip. During the rebuttal, we investigated more examples and found that the model has acquired a preliminary ability to describe the temporal patterns of a video -- see Figure 16 in the `attachment`, where Artemis produces reversed descriptions (a woman walking *down* and *up* the stairs) when the input video is played in the regular and reversed directions. This implies that the MLLM can learn extra temporal knowledge beyond the tracked RoIs. Such abilities may stem from the self-attention module of the MLLM that summarizes the sequential visual features.
(2) An important discovery of our work is that making proper use of temporal knowledge can substantially boost the accuracy of video-based referring. We show a preliminary solution (*i.e.* using an off-the-shelf tracking algorithm to compensate for the missing temporal knowledge), and our efforts reveal a future direction to equip MLLMs with this ability (*e.g.* one can prompt MLLMs to track the referred objects). Respectfully, we believe that introducing a tracking algorithm (*e.g.* HQTrack) and making the system work is part of our technical contribution.
We will explore stronger solutions in the future. The above discussions will be added to the final paper.
**Q3:** *Comparisons with other models (e.g., Shikra, Merlin) seem somewhat unfair, as these models do not utilize the dynamic knowledge from HQTrack. A fairer comparison setting would enhance the validity of the results.*
**A3:** Good question! Actually, in evaluating Shikra and Merlin for comparison, we used the same set of tracked bounding boxes (produced by HQTrack) to compute the required visual information: for Merlin, visual features of the entire image were fed into the MLLM together with the bounding box information (in text form); for Shikra, which cannot process multiple images simultaneously, both the images and tracked bounding boxes were provided. Therefore, both Shikra and Merlin made use of the dynamic knowledge from HQTrack, and thus the comparison was as fair as possible.
We will add the above clarification to the final paper.
**Q4:** *I am a bit surprised that MLLMs for video referring and grounding have not been explored yet. As I am not very familiar with referential understanding, I look forward to more feedback from other reviewers. If this paper does not overlook any important references, I am willing to upgrade my rating.*
**A4:** Here we offer some information for your reference. To the best of our knowledge, two related papers exist prior to our work, namely, PG-Video-LLaVA and Merlin, both of which have been cited in the original submission (see Lines 78--81 in Section 2). PG-Video-LLaVA used off-the-shelf detectors to perform grounding, but the model itself did not have the ability to perform fine-grained video understanding. Merlin studied video-based referring, but it required three manually specified input frames, incurring an extra burden for users. Additionally, we validate the advantage of Artemis over Merlin, its direct competitor.
---
Rebuttal Comment 1.1:
Comment: The authors' feedback has addressed my concerns, so I choose to raise my rating.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We are happy that our response addressed your question. We appreciate your support for our work. | Summary: The paper proposes to bring fine-grained understanding to multimodal LLMs (MLLMs) by introducing a video-based referential understanding task. The paper is motivated by the drawbacks of current image- and video-based MLLMs and the need for region-specific features to answer region-specific questions. The proposed approach expands the set of video-level features with target region-specific features (box-level prompts) via tracking (HQTrack), alignment (RoIAlign), and selection (clustering). The paper ablates the effectiveness of tracking and selection criteria for improved performance.
Strengths: - The paper is well-written and clear, and the motivation for target-specific features is well presented
Weaknesses: - A fair baseline
- It’s great that the paper provides comparison with image-based MLLMs by extending them to videos and using an LLM to obtain video-level answers
- As for the multi-frame approach, it appears that MERLIN is the closest baseline approach. And there seems to be some intersection of pre-training data used for Artemis and MERLIN that includes GOT10K, LaSOT, MeViS
- But the evaluation of the video-based referring ability is done on the test set of HC-STVG (lines 197-198), and the train set of HC-STVG is included in the pre-training data for Artemis but not for MERLIN
- Since MERLIN was not trained or fine-tuned on HC-STVG, evaluating MERLIN on the test set of HC-STVG does not seem to be a fair, apples-to-apples comparison
- On top of that, datasets have biases, which include different label spaces, scene and setting (e.g. HC-STVG is collected from movies), and annotation gathering (which can also result in a different caption distribution, and hence bias the eval metrics)
- Ablation
- Assuming “w/o” in Table 2 means that there is no RoI selection (not mentioned or defined in the text), i.e., that there is no <track-instruction>: if this assumption is true, “w/o” sets up a strong baseline (even better than MERLIN)
- In any case, it seems that a careful ablation study is missing
- (1) w/o <track-instruction>
- (2) w/ <track-instruction> but where <region> features are not RoI features but the key-frame features. This is to establish whether the improvement is brought forth by key-frame selection or region-level features specifically
- Visualization and sanity check post-training
- Do the authors have an example of a video with multiple different regions? Mainly, this is to inspect how the model response changes with the selection of different regions in the same video, and whether it degenerates to the same response.
- Less important, but do the authors have an impression of whether the caption changes if the video is reversed?
- Human evaluation
- Lastly, to substantiate the effectiveness of the approach, did the authors think about a human evaluation study on accuracy / relevance of predicted captions to the video-question pair?
- This could also be done within your group and with anonymized predictions (meaning the human evaluator doesn’t know what prediction is from what model)
## Minor
- A bit more about the architectural details would have been great (at least in supplemental), especially the tokenization process and whether / how start-end tokens for <instruction> were used
- Do the authors have some statistics on the object category of <region> in the test set of HC-STVG?
- This is mainly to identify what biases the models are dealing with, and whether those biases skew heavily in one direction. For example, "person" may be the majority category
- Similarly, any statistics of actions being performed and asked in questions?
- Lastly, do the authors have a quantitative breakdown of performance to understand where the model fails?
Technical Quality: 3
Clarity: 4
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contributions of our work. We deeply appreciate your constructive comments and have provided point-by-point responses below. Further comments are welcome.
**Q1:** *A fair baseline.*
**A1**: During the rebuttal, we fine-tuned Merlin on the same data (*i.e.* VideoRef45K) using LoRA (same as Artemis). We extracted 5 key frames with a bounding box for each clip, aligning with Artemis, which uses 5 RoI tokens.
Results are shown in the following table. Fine-tuning brings a significant improvement to Merlin, but the metrics are still lower than Artemis. The reason is two-fold. (1) Artemis introduces RoI tracking, clustering, and selection to obtain accurate localization, so that extracted visual features are of higher quality. (2) Merlin only sees 3--8 frames, while Artemis' encoding method (see Lines 116--122) preserves richer information.
|Method|BLEU@4|METEOR|ROUGE_L|CIDEr|SPICE|
|:---:|:---:|:---:|:---:|:---:|:---:|
|Merlin|3.3|11.3|26.0|10.5|20.1|
|Merlin (ft)|9.7|14.2|35.7|35.1|21.9|
|Artemis (ours)|15.5|18.0|40.8|53.2|25.4|
We will add them to the paper.
**Q2:** *Ablation.*
**A2**: There are some misunderstandings; we clarify them below (and will update the notations).
First of all, "w/o" is **not** the baseline, but indicates the variant that utilizes the RoI features to encode the referred object based on Video-ChatGPT. That is, "w/o" is part of the proposed method, and it already reports higher performance than Merlin.
To facilitate a more intuitive comparison, we added a new baseline. Given an object of interest, we enclosed its location in each frame with a rendered red rectangle, encoded the video using Video-ChatGPT, and asked "What is the object in the red rectangle doing in this video?". As shown in the table below, this "baseline" achieves slightly lower results than those of "w/o".
We also added a new option named "w/ <track-instruction>", where the <region> features were replaced with visual features (the `CLS` token of CLIP-ViT-L/14) of the key frames. As shown, there is a performance drop compared to Artemis. This is because the RoI features are of higher quality, unlike the whole-frame features, which are impacted by the background.
|Method|BLEU@4|METEOR|ROUGE_L|CIDEr|SPICE|
|:---:|:---:|:---:|:---:|:---:|:---:|
|baseline|11.2|16.3|34.9|23.8|21.4|
|w/o|13.9|16.9|39.1|43.7|23.2|
|w/ \<track-instruction\>|13.9|16.9|38.2|42.1|23.1|
|Uniformly|14.2|17.2|39.4|44.5|23.6|
|Artemis (Ours)|15.5|18.0|40.8|53.2|25.4|
**Q3:** *Visualization and sanity check.*
**A3**: Figure 15&19 (`attachment`) shows how Artemis produces different (and correct) answers for different <region>s, as long as tracking is correct. Figure 16 shows how Artemis produces correct descriptions for regular and reversed videos, *i.e.* the woman is walking *down* the stairs in the regular video, and walking *up* the stairs in the reversed video. We will add them to the paper.
**Q4:** *Human evaluation.*
**A4**: We randomly selected 100 videos from the HC-STVG test set. For each case, we applied Artemis and the fine-tuned Merlin (see **A1**) to produce the answers, and asked a person to evaluate their quality (the video and ground-truth were provided). The evaluator was **not** aware of which answer came from which model and gave a score of 1--5 to each answer (1=worst and 5=best). On average, Artemis and the fine-tuned Merlin score 3.36 and 2.65, respectively. Artemis beats Merlin in 53/100 cases and loses in 21/100 cases. We will add these results to the paper.
**Q5:** (Minor) *More architectural details.*
**A5**: We will add them to the paper.
Artemis consists of three components: a visual encoder (CLIP-ViT-L/14), a large language model (Vicuna-v1.5-7b), and a RoI feature extraction module. The RoI feature extraction module utilizes the visual features of 4 layers of the visual encoder to extract RoI features and passes them to a linear layer to obtain the RoI token.
For text tokenization, we use Vicuna-v1.5's built-in tokenizer. To insert the video and RoI features into Q&A, we use \<image\> as a placeholder for video features, with its ID=-200 indicating the position of the video features. Similarly, we use \<bbox\> for RoI tokens with ID=-500. No additional tokens like \<image-start\> and \<image-end\> are used.
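For intuition, the placeholder mechanism above could be sketched as follows (an illustrative sketch only, not the actual Artemis implementation; the function and variable names are hypothetical):

```python
IMAGE_PLACEHOLDER_ID = -200  # marks where video features are spliced in
BBOX_PLACEHOLDER_ID = -500   # marks where RoI tokens are spliced in

def expand_placeholders(token_ids, video_slots, roi_slots):
    """Replace placeholder IDs in a tokenized prompt with the
    (already projected) video / RoI feature slots, keeping the
    ordinary text token IDs unchanged."""
    out = []
    for tid in token_ids:
        if tid == IMAGE_PLACEHOLDER_ID:
            out.extend(video_slots)
        elif tid == BBOX_PLACEHOLDER_ID:
            out.extend(roi_slots)
        else:
            out.append(tid)
    return out
```

Here each "slot" stands in for one feature embedding; in the real model the splice happens at the embedding level rather than the token-ID level.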
**Q6:** (Minor) *Statistics on object category/actions.*
**A6:** All referred objects in the HC-STVG test set are humans (referred to as different nouns like man/woman). In the training set (VideoRef45K), there are other categories (*e.g.* animals/vehicles) -- see Figure 8 in Appendix A; the actions of these objects are much simpler compared to humans in HC-STVG, so we chose HC-STVG test set to challenge Artemis and others.
We used SpaCy to extract and count the action types in the HC-STVG test set. The distribution of 384 actions is shown in Figure 17 (`attachment`).
**Q7:** (Minor) *Breakdown of failure.*
**A7:** We defined five error types:
* Temporal perception error, *e.g.* output is "going down", ground-truth is "going up".
* Incomplete action error, *e.g.* only mentioning "walking" but omitting "turning" in the ground-truth.
* Action recognition error, *e.g.* output is "running", ground-truth is "jumping".
* Object recognition error, *e.g.* output is "woman", ground-truth is "man".
* Multi-object interaction error, *e.g.* output is "woman gave sth. to man", ground-truth is "man gave sth. to woman".
We provided the output and ground-truth of each failure case to GPT-3.5 to obtain the error type. Statistics are shown in Figure 18 (`attachment`). The most frequent failures come from incomplete actions and object recognition errors, although RoI tracking has alleviated them to some extent. Moreover, object recognition errors mainly happen on the interacting object, *e.g.*, "the man touches the woman's face" is misrecognized as "the man touches the man's face". This indicates the limited improvement of RoI tracking on interacting objects and points to a future direction.
We will add them to the paper. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their meticulous work and the insightful comments provided to us.
All reviewers acknowledged the novelty and contributions of the proposed approach (Artemis).
**Reviewer U8HM&CDQu:** The paper is **well-written and clear** and the **motivation is well presented**.
**Reviewer CDQu:** The **method is simple yet aligns well with the paper's motivation**. The **experimental section is robust and well-executed**.
**Reviewer xzsp:** A video-based referential understanding dataset, VideoRef45K, is established. **It facilitates the development of the area**, providing referential pretraining data. The design of target-specific feature branch in the model architecture is **well-motivated**. Combined with existing video-language models, Artemis can perform multi-round video understanding with grounding and long-form video understanding.
The major concerns and suggestions lie in the **fair comparison with other models** (**U8HM**, **CDQu**), **more ablative studies** (**U8HM**), **more model comparisons** (**xzsp**), the **influence of the tracking model** in Artemis (**CDQu**), and more examples and **implementation details** (**U8HM**, **CDQu**).
During the rebuttal, we have carefully considered all of the reviewers' feedback and provided more experiments and ablation studies, as suggested. We believe these point-by-point responses address the reviewers' concerns and further enhance our work. We also include a PDF file (referred to as the `attachment`) with additional experimental results to support our responses.
Pdf: /pdf/1f03ccaf2e56ac05afd185f8c8fe28e7793a125c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Enhancing Large Vision Language Models with Self-Training on Image Comprehension | Accept (poster) | Summary: This paper addresses the problem of acquiring high-quality fine-tuning data for large vision language models (LVLMs) with minimal human effort. The paper presents STIC (Self-Training on Image Comprehension). STIC contains two stages: image comprehension self-training and description-infused fine-tuning. The LVLM first generates descriptions from clean images and corrupted images, regarding the two as preferred and dispreferred responses, respectively. The base LVLM is trained on the generated preference data leveraging the direct preference optimization (DPO) framework. Next, the LVLM is fine-tuned on instruction-following data. Experimental results show that STIC outperforms the compared methods on diverse benchmarks.
Strengths: - S1: Overall, the manuscript is well-written and easy to read. Preliminaries and figures help readers understand the paper.
- S2: The paper presents diverse analysis and discussion in experiments.
Weaknesses: - W1: The manuscript has limited soundness for several reasons. For example, the authors did not validate the effectiveness of description-infused fine-tuning. In Table 2, much of the performance gain comes from the prompting method (DaR) which is outside STIC.
- W2: While there is a rich literature on existing self-training algorithms [1,2,3,4,5], the paper only discusses recent self-improvement systems, especially in the context of LLMs. This limited investigation leads readers to question what the technical contribution of STIC is compared with the existing line of semi-supervised learning (or self-training) research.
- W3: The paper experimented with a fairly small amount of unlabeled images (6k and 12k images). What and how much data should be used to maximize the performance has not been thoroughly investigated.
**References**
[1] Self-training with noisy student improves imagenet classification. Xie et al., CVPR 2020.
[2] Rethinking pre-training and self-training. Zoph et al., NeurIPS 2020.
[3] Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Sohn et al., NeurIPS 2020.
[4] Revisiting self-training for neural sequence generation. He et al., ICLR 2020.
[5] The dialog must go on: improving visual dialog via generative self-training. Kang et al., CVPR 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Q1: In Algorithm 2, why aren’t the model-generated description $\mathbf{y}_{\mathrm{des}}$ used for fine-tuning?
- Q2: What is the motivation of the stage 2 (Description-infused fine-tuning)? It is not directly related to the motivation of the paper (difficulty of obtaining high-quality fine-tuning data).
- Q3: How can we validate the effectiveness of description-infused fine-tuning? Did the authors check the performance of the method which did not use the model description?
- Q4: Table 1 just shows the performance with and without STIC. How much does each stage contribute to performance improvement?
- Q5: Table 2 shows that the describe-and-respond (DaR) prompting method improves overall performance. DaR was not mentioned until the section for experiments. Why did not the authors describe DaR in detail in the method section?
- Q6: Is there any reason why STIC randomly selects unlabeled images? Some self-training algorithms present the methods for selecting unlabeled data to learn.
- Q7: How can we guarantee that all samples generated by the LVLM inputting clean images are preferred responses?
**Typos**
Figure 1: SPIC to STIC
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations were adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed feedback. Please find our responses below. We hope that our clarifications and additional experiments resolve the misunderstanding.
### **W1/Q3: In Table 2, much of the performance gain comes from DaR.**
We respectfully point out that there is a misunderstanding of the results presented in Table 2. As discussed in lines 278-283 of our submission, Table 2 (rows 1-2) showed that DaR prompting alone even results in degradations, and only combining it with STIC achieves the best result. Contrary to your conclusion, the 3rd row, **STIC without DaR**, clearly showed that STIC fine-tuning itself leads to significant improvements across all tasks, with the average increasing from 54.8 to 57.6. Notably, it achieves the best results on MMBench and MM-Vet, proving its soundness. In the 4th row, we combined STIC with DaR. While DaR prompting alone is not sufficient, the integration of STIC significantly boosts the model's image comprehension ability and consequently the effectiveness of DaR.
More analysis of the stages is provided in [our global rebuttal (Global A2)](https://openreview.net/forum?id=FZW7Ctyjm3&noteId=VV3MK6FiS7).
### **W2: Existing self-training methods for vision models are not discussed.**
Thank you for mentioning the references, we will include them in discussion in our revision. Specifically:
- **Similarity**: The major goal shared by the mentioned papers and STIC is designing an effective way of leveraging unlabeled data to improve the performance of current models.
- **Differences**:
1. (Focus/Model) The mentioned papers focus on representation learning of deep learning models. Meanwhile, STIC focuses on vision LLMs, where the backbone remains an LLM. While previous deep learning models are grounded in representation learning and further specialized for specific tasks, LLMs are autoregressive and easily generalize to different tasks. Instead of training for better image representations, STIC aims to gather synthetic data for the LLM to produce higher-quality responses to a query on an image.
2. (Algorithm) We focus on **alignment fine-tuning**. Notably, classic self-training algorithms for vision models do not employ alignment algorithms like RLHF or DPO. While providing a positive training signal is beneficial, having negative examples is crucial for the success of LLM alignment. As shown in Table 3 of our submission, including only positive examples for SFT is not as effective as having pairwise preference data.
### **W3: A small amount of unlabeled images**
In our discussion with POVID in Section 6 Figure 5, we highlighted the **data efficiency** of STIC. While POVID uses 17k SFT data, STIC achieves better results with a total of 11k data (5k SFT and 6k unlabeled). STIC requires a smaller amount of data to achieve significant improvements.
We conducted an additional experiment using 30k unlabeled images, as shown in Figure 1 of our [attached one-page pdf](https://openreview.net/attachment?id=VV3MK6FiS7&name=pdf). The results demonstrate the scalability of STIC when applied to larger datasets.
### **Q1: Why aren’t model-generated description 𝑦_des used for fine-tuning?**
We will correct this typo in Algorithm 2 in our revision. The correct data used for fine-tuning should be $([v^{(i)}, y_{des}, x^{(i)}], y^{(i)})$. Algorithm 2 indeed aims to use the model-generated description and append it before the question prompt for fine-tuning.
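As a purely illustrative sketch of this data construction (the names below are hypothetical, not from the STIC code), one description-infused example could be built as:

```python
def build_infused_example(describe_fn, image, question, answer):
    """Build one description-infused fine-tuning example
    ([v, y_des, x], y): the model's self-generated description
    y_des of image v is prepended to the question prompt x,
    and the original answer y is kept as the target."""
    y_des = describe_fn(image)  # self-generated description
    return {
        "image": image,
        "prompt": f"{y_des}\n{question}",  # y_des placed before x
        "target": answer,
    }
```

Passing the model's own description routine as `describe_fn` would reproduce the self-generated flavor of stage 2.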
### **Q2: Motivation of Stage 2**
As explained in our method section, stage 2 aims to further fine-tune the model to leverage self-generated image descriptions for downstream tasks, and thus helps ground its reasoning ability in such descriptions. While stage 1 improves the model's ability in image description, it does not focus on leveraging these descriptions for subsequent tasks such as question answering. Stage 2 fine-tunes the model by reusing a small amount of its SFT data and infusing it with self-generated descriptions, specifically strengthening the model's ability to reason based on descriptions.
### **Q4: How much does each stage contribute to performance improvement?**
Thank you for raising this important point. Please see our response in [Global A2](https://openreview.net/forum?id=FZW7Ctyjm3&noteId=VV3MK6FiS7).
### **Q5: DaR was not mentioned early.**
DaR was discussed in lines 211-213 of our submission and was not elaborated on due to the page limit. Please see our elaborated explanation in [Global A3 of our global rebuttal](https://openreview.net/forum?id=FZW7Ctyjm3&noteId=VV3MK6FiS7). We will add a paragraph to our method section in the revision.
### **Q6: Why STIC randomly selects unlabeled images.**
Data selection is indeed a promising future direction and we will include this discussion in revision.
The current implementation of STIC randomly selects unlabeled images as it simplifies the process and ensures a broad and diverse sampling of data. In Table 3 of our [attached one-page pdf](https://openreview.net/attachment?id=VV3MK6FiS7&name=pdf), we included an experiment using the same amount of images but more diverse distribution (Vision Flan) for stage 1. Notably, the increased diversity led to further improvements in STIC, suggesting the potential for enhancement with better sets of unlabeled images.
### **Q7: How can we guarantee that all samples generated by LVLM on clean images are preferred?**
Regarding the specific question on clean images vs. corrupted images, we note that this approach has become widely used and tested in concurrent works focusing on LVLM alignment [1]. In our paper, Figures 3, 9, and 10 also showed that image corruptions indeed cause observable declines in model output quality. Please also see our response in the [global rebuttal (Global A1)](https://openreview.net/forum?id=FZW7Ctyjm3&noteId=VV3MK6FiS7) for a detailed explanation of preference alignment.
[1] Aligning modalities in vision large language models via preference fine-tuning.
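For concreteness, a minimal sketch of how such clean-vs-corrupted preference pairs might be assembled (illustrative only; `describe_fn` and `corrupt_fn` are hypothetical stand-ins for the LVLM's generation step and the image corruption step):

```python
def make_preference_pair(describe_fn, corrupt_fn, image, prompt):
    """Build one DPO preference pair: the description of the clean
    image is treated as the preferred (chosen) response, and the
    description of a corrupted copy as the dispreferred (rejected)
    response."""
    chosen = describe_fn(image, prompt)                # clean image -> preferred
    rejected = describe_fn(corrupt_fn(image), prompt)  # corrupted image -> dispreferred
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}
```

The resulting `prompt`/`chosen`/`rejected` triples are exactly the format DPO-style trainers consume.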
---
Rebuttal Comment 1.1:
Title: Invitation for discussion
Comment: Dear reviewer uhYN,
Thank you again for your detailed and valuable feedback to this paper. We hope that our responses and clarifications have adequately addressed your questions and concerns. Specifically,
1. We provided detailed clarifications on the results of Table 2 and explanations on DaR (Global A3).
2. We added a discussion paragraph on the similarities and differences with the mentioned related works, emphasizing our specific focus. These works will be incorporated into our revision.
3. We conducted additional experiments scaling up the data to 30k and explained the data efficiency of our method.
4. We provided clarifications and explanations for each of your specific questions.
We hope these responses adequately address your concerns. If you have any further questions about our rebuttal, we'd be happy to provide additional information or clarification. Thank you once again for your time and efforts! | Summary:
The paper presents STIC, a method to enhance LVLMs by reducing the need for labeled data. STIC generates image descriptions using unlabeled images and improves reasoning by reusing existing instruction-tuning data. It demonstrates performance gains across seven benchmarks, showing the potential to effectively leverage vast quantities of unlabeled images for self-training.
Strengths:
1. The proposed method improves the VLLM's performance with efficient data-collection cost.
2. The performance improvement is consistent on diverse benchmarks and achieves an average accuracy gain of 3.8%, which is quite significant.
Weaknesses:
1. Interesting idea about using different prompts to generate both good and bad captions. However, I wonder if there is any method needed to ensure the correctness of the prompts generated by the "step-by-step" prompt strategies. Based on my own observations, VLLM is not good at following instructions, which means the description will be even worse if the prompts are complicated. Have the authors observed similar problems?
Technical Quality: 3
Clarity: 3
Questions for Authors: Have the authors changed the default LaTeX template? There should be no anonymous submission ID. Also, it seems that the authors gained more space because they removed the anonymous authors part.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and make clarifications as below.
### **W1. If there is any method needed to ensure the correctness of the well-crafted prompt?**
Thank you for raising this important question. The concern regarding the complexity of prompts is indeed crucial. To address this, we implemented restrictions and human filtering on multiple candidate prompts generated by GPT-4 to ensure they function as intended. This process can be viewed as a behavior distillation from a stronger model.
To ensure effectiveness, we tested these prompts on MSCOCO samples and verified the LVLM’s instruction-following quality through human evaluation. We provide a detailed explanation along with additional experiments in our [global rebuttal (Global A1)](https://openreview.net/forum?id=FZW7Ctyjm3¬eId=VV3MK6FiS7), and we will include this discussion and the additional experiments in our revision.
Regarding the concern about instruction-following abilities, which may be weaker in untrained models, we found that DPO alignment fine-tuning significantly enhances this capability. This allows the model to learn not only the preferred response but also to identify and avoid dispreferred and often erroneous responses.
Verification remains an intrinsically challenging problem, especially since there is no ground truth answer in the context of image description tasks with unlabeled images. To mitigate this, we applied a human filtering strategy for the prompts, which has proven effective and not costly. To scale up our method and move towards a fully autonomous framework, we plan to involve a critic model in the process in our future studies.
### **Q1. Have the authors changed the default latex template?**
We apologize for any confusion. We did not intentionally alter the template; it appears we inadvertently included the submission ID. We appreciate your understanding and will ensure this mistake is corrected. We have checked that, with the corrected template, the paper still fits within the page limit.
---
Rebuttal Comment 1.1:
Title: Invitation for discussion
Comment: Dear reviewer s8oQ,
Thank you again for your time and feedback to this paper. We hope that our clarifications have adequately addressed your questions. Specifically, we provided detailed explanations into the prompts of our method and further discussion in Global A1. We are following up to inquire if there are any remaining questions. We are more than happy to further discuss and provide clarifications.
Thank you once again for your time and efforts!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer s8oQ,
Thank you again for taking the time to review our paper. We appreciate your acknowledgment of our work's soundness, presentation, and contribution.
In response to the feedback received, we have included extensive additional experiments as well as detailed clarifications in our rebuttal. While other reviewers have responded positively to our rebuttal, we hope to adequately address your concern as well. We would greatly appreciate your attention to our specific response regarding your question on prompt design, as we believe it addresses your primary concern. It would be great to also give us the opportunity to provide more details and further improve our work. Thank you! | Summary: This paper proposes a two-stage method to enhance Large Vision Language Models (LVLMs) using unlabeled images. In the first stage, well-designed good and bad prompts are used to make the LVLM generate preferred and dis-preferred completions, respectively, conditioned on the unlabeled images (from COCO). Then, direct preference optimization (DPO) is used to fine-tune the LVLM using the generated preferred and dis-preferred completions. In the second stage, the fine-tuned model is used to generate descriptions for images in an instruction-tuning dataset (from LLaVA's data). The generated descriptions are inserted into the instruction-tuning data to fine-tune the model. The fine-tuned model shows significant improvement over the baseline (the LVLM before fine-tuning) on seven VLM benchmarks.
Strengths: - The biggest strength is the significant improvement over the baseline LVLM achieved by the proposed method. There is an average improvement of 4 points on the seven VLM benchmarks.
- The proposed method mainly leverages unlabeled images for training, which gives the proposed method a great potential to use a vast amount of unlabeled images. The authors also show using more unlabeled images for training improves the performance.
Weaknesses: - The paper leaves some questions unanswered.
- The effect of the prompt set is less explored in the paper. It seems that the prompts play a crucial role in data construction. It is unclear how the authors designed the well-curated captioning prompt and the hallucination prompt set. Are there any principles behind the design? Especially for the well-curated captioning prompt that generates the preferred data, how do different design choices affect the final model performance in the evaluation?
- Is the performance gain dependent on the MSCOCO data set? Do other image datasets (such as Flickr30k) work in stage 1?
- It seems counterintuitive that fine-tuning an LVLM on MSCOCO will help improve its performance on science-related benchmarks like ScienceQA. It will help us better understand the mechanism by comparing the model generations in the benchmarks before and after using the proposed method.
- The description of describe-and-respond (DaR) prompting is a bit unclear. I could not fully understand the setting.
Technical Quality: 2
Clarity: 3
Questions for Authors: - What is the difference between the image captioning prompt set $P$ and the well-curated captioning prompt (Algorithm 1)?
- What is the prompt $x$ used in DPO in stage 1?
- What is the model performance if only stage 1 is performed (i.e., without stage 2's SFT)?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your support and the constructive feedback that helped us improve our work. Please see our detailed response with additional experiments below.
### **W1a: Principles behind the prompt design.**
In short, we use GPT-4 to generate and sample multiple initial prompts, which are then refined through human filtering. To ensure effectiveness, we test these prompts on MSCOCO samples, verifying their ability to produce well-structured and relevant responses from the model. Using DaR performance as an evaluation of the prompts (Table 2 of the [attached one-page pdf](https://openreview.net/attachment?id=VV3MK6FiS7&name=pdf)), we showed that better-crafted prompts result in better DaR performance even on plain models.
Please see our detailed response in [Global A1 in the global rebuttal](https://openreview.net/forum?id=FZW7Ctyjm3&noteId=VV3MK6FiS7). We will include the discussion and experiments in our revision.
### **W1b: Is the performance gain dependent on MSCOCO data?**
Thank you for raising this important point. To address this, we conducted additional experiments using images from various sources. Specifically, we utilized the Vision Flan dataset (VFLAN: https://huggingface.co/datasets/Vision-Flan/vision-flan_191-task_1k) for stage 1 image comprehension self-training. This dataset includes images from 191 diverse vision tasks, providing a broad spectrum of image types.
We ensured a fair comparison by maintaining the same sample size (randomly sampled 6,000 images) and have presented the experimental results in Table 3 of the [attached one-page pdf](https://openreview.net/attachment?id=VV3MK6FiS7&name=pdf). The results indicate that our approach improves consistently across different datasets, demonstrating its robustness and adaptability. Notably, the increased diversity of VFLAN led to further improvements in STIC, suggesting the potential for even greater enhancement with better sets of unlabeled images. This finding aligns with our analysis in Figure 8 of the main paper, where we observed a positive correlation between the overlap of MSCOCO's image distribution with a benchmark and the performance gains achieved by STIC on that benchmark.
### **W2: It’s better to compare model generations before and after using STIC.**
Thank you very much for the suggestion. In Figure 1 of our submission, we showed an example of model generation before and after using STIC. In lines 75-76, we provided a discussion of how STIC improves the model response by successfully identifying the key visual information for subsequent reasoning.
In the [attached one-page pdf](https://openreview.net/attachment?id=VV3MK6FiS7&name=pdf), we included two additional model generation examples before and after applying STIC, as illustrated in Figures 2 and 3. Despite the task being focused on mathematical reasoning, STIC enhanced the model’s response by improving its image comprehension capabilities. While the original model merely identified one of the recognized numbers in the image as the final answer, the STIC fine-tuned model was able to interpret the meaning of each number, describe them accurately, and perform reasoning based on this understanding.
Furthermore, in Figure 8 and its corresponding ablation study of our main paper, we examined the improvement in ScienceQA, which shares a large overlap with the MSCOCO image distribution.
### **W3: Further explanations on DaR prompting.**
We apologize for the lack of clarity in our initial presentation on DaR. Please see our explanation in [Global A3 of our global rebuttal](https://openreview.net/forum?id=FZW7Ctyjm3&noteId=VV3MK6FiS7). We will add a paragraph in our method section to fully illustrate DaR in the revision.
### **Q1: Difference between the image captioning prompt set and the well-curated prompt.**
We detailed the image captioning prompt set in our answer to Q6. Here are the key differences between the image-captioning prompt and well-curated captioning prompt:
**Image Captioning Prompt Set**: This set comprises concise and straightforward prompts designed to elicit basic image descriptions. These prompts typically ask the model to describe the image in simple terms without additional guidance or structure.
- Purpose: The prompts in set P serve as the target task.
**Well-Curated Prompt**: These prompts are designed to be more elaborate and structured, crafted to elicit higher-quality responses by encouraging the model to engage in a more systematic reasoning process.
- Purpose: Generate superior responses to provide a learning signal for preferred/positive responses.
This process alone (without the negative/dispreferred responses) is similar to the currently popular method called system 2 distillation [1], where the model is fine-tuned on its responses generated from a more complex, step-by-step prompt. The goal is to teach the model to apply the enhanced reasoning patterns induced by the well-curated prompts when responding to the simpler prompts (e.g., those in set P).
We will add the above explanation in our next revision.
[1] Distilling System 2 into System 1
### **Q2: What is the prompt 𝑥 used in DPO in stage 1?**
We included some of the prompts below due to character limit and will include the full set of eight prompts in our future revision.
- "Illustrate the details of the picture.",
- "Summarize the visual content presented.",
- "Explain what is depicted in the photograph.",
- …
### **Q3: Model performance if only stage 1 is performed.**
Thank you for raising this important point. In short, while stage 1 focuses exclusively on enhancing the perception capabilities of LVLM, it still notably improves performance on downstream VQA tasks (1.1% accuracy gain on ScienceQA).
Please see our detailed response in [Global A2](https://openreview.net/forum?id=FZW7Ctyjm3&noteId=VV3MK6FiS7) on the progression of stages.
---
Rebuttal Comment 1.1:
Title: Invitation for discussion
Comment: Dear reviewer B7oX,
Thank you again for your support and constructive comments. We hope that our responses and clarifications have adequately addressed your questions and concerns.
Specifically, we provided detailed explanations toward the prompt design (Global A1) and DaR (Global A3). We further added an experiment on unlabeled images from a different distribution than MSCOCO, where we observed that a more diverse unlabeled image data can provide better improvement for STIC. Regarding W2 and Q3, we provided specific generation examples before and after STIC, as well as its stage-wise performance on ScienceQA.
We would like to inquire if there are any questions about our rebuttal, for which we're happy to provide additional information and further clarifications. Thank you once again for your time and efforts on this paper!
---
Rebuttal Comment 1.2:
Comment: Thank you for the detailed responses. My questions are addressed. I will raise my rating to 6.
---
Reply to Comment 1.2.1:
Comment: Thank you for replying! We greatly appreciate your positive feedback on our rebuttal. | Summary: This paper introduces Self-Training on Image Comprehension (STIC), which emphasizes a self-training approach specifically for image comprehension. First, the model self-constructs a preference dataset for image descriptions using unlabeled images. Preferred responses are generated through a step-by-step prompt, while dis-preferred responses are generated from either corrupted images or misleading prompts. To further self-improve reasoning on the extracted visual information, the model reuses a small portion of existing instruction-tuning data and appends its self-generated image descriptions to the prompts. Improvements in several benchmarks are reported.
Strengths: 1. The challenges of self-training with VLM are discussed, which is appreciated.
2. The proposed STIC approach is claimed to be a novel two-stage self-training method that targets both image perception and reasoning over images and texts, which is intriguing.
3. STIC does not require pre-labeled information on the images,
4. The methodology of constructing dis-preferred data using bad prompting is pretty interesting.
Weaknesses: 1. The experiments are conducted with 7B-level LLava 1.5 and 1.6. The method's scalability remains questionable.
Technical Quality: 3
Clarity: 3
Questions for Authors: What if we try the proposed method with smaller or bigger models or other ViT families (e.g., EVA-CLIP models)? With a higher representational capacity, will the model benefit more or less from self-training?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your strong support and valuable feedback! We address your major comment as follows.
### **W1: The scalability of STIC**
To explore STIC's applicability to models with higher representation capacity, we conducted supplementary experiments using LLaVA-v1.6 (Vicuna-13B).
| **Model** | **LLaVA-Bench** (Conv) | **LLaVA-Bench** (All) | **MM-Vet** (Gen) | **MM-Vet** (All) | **MMBench** |
|------------------|-----------------|-----------------|-----------------|------------|-------------|
| LLaVA-v1.6 (7B) | 61.3 | 77.3 | 32.5 | 42.2 | 63.7 |
| LLaVA-v1.6 (13B) | 73.8 | 84.5 | 45.2 | 48.9 | 70.6 |
| LLaVA-v1.6 (13B) w/ STIC | **78.1** | **85.6** | **49.4** | **50.5** | **72.3** |
Table 1 in the [attached one-page pdf](https://openreview.net/attachment?id=VV3MK6FiS7&name=pdf) shows the detailed and comprehensive experiment results. Due to compute and time constraints, we included the three benchmarks (LLaVA-Bench, MM-Vet and MMBench) that cover a variety of tasks and can comprehensively evaluate the model’s performance. We used the same images for STIC fine-tuning as in our experiments for LLaVA-v1.6 (Mistral-7B) to ensure fairness, and the same set of hyperparameters due to time constraints. The improvements observed with LLaVA-v1.6 (Vicuna-13B) demonstrate that STIC is not only effective with smaller models but also scales well with larger or more capable LVLMs. It also shows potential for further improvement through hyperparameter tuning, data filtering, and enhanced data generation.
We hope that our additional experiments address your raised concerns. Let us know if there remain further questions, and we are happy to discuss them.
### **Q1: What if we try the proposed method with different LVLM models?**
Thank you for this insightful question. In our original and additional experiments, we employed STIC with LLaVA-v1.5 and LLaVA-v1.6, incorporating various LLM backbones at different scales, specifically Vicuna-7B, Mistral-7B, and Vicuna-13B. These models, all incorporating strong visual encoders from the CLIP family, demonstrated effective improvements with STIC. While our current exploration of different models was constrained by time and computational resources, we recognize the importance and potential of exploring a wider range of LVLM models. Future work could investigate models with diverse architectures and LLM backbones, such as Llama-3, to further explore the potential of our proposed method.
---
Rebuttal Comment 1.1:
Title: Invitation for discussion
Comment: Dear reviewer qFQt,
Thank you again for your strong support and constructive feedback. We hope that our responses have adequately addressed your questions. Specifically, we added the additional experiment on scaling up model sizes for STIC and provided discussion on further extending it to various models. We would like to inquire if you have further questions regarding our rebuttal. We are more than happy to discuss any remaining questions and provide additional details.
Thank you once again for your time and efforts on this paper!
---
Rebuttal 2:
Title: After rebuttal
Comment: I keep my original score and tend to accept this paper. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their insightful and encouraging feedback on our manuscript. We are grateful for the recognition of the significant performance gains achieved by STIC (B7oX, s8oQ), the novelty and efficiency of STIC (qFQt, s8oQ), the effective use of unlabeled images (qFQt, B7oX) and the comprehensive analysis and clarity of our manuscript (uhYN).
In response to the comments, we provided additional experiments in the [attached one-page pdf](https://openreview.net/attachment?id=VV3MK6FiS7&name=pdf). Specifically, the results include
1. **Scaling up to 13B model (Table 1)**: STIC is effective in improving larger-scale models as well, with further improvement potential in hyperparameter tuning, data filtering and data scaling.
2. **Quantitative analysis of the prompt quality (Table 2)**: prompts with better quality (our well-crafted prompt derived from GPT-4) provide better DaR performance on the original model.
3. **Effect of enhanced image diversity in unlabeled images (Table 3)**: more diverse unlabeled image data can provide better improvement for STIC, which aligns with our ablation study on image distributions.
4. **Effect of more unlabeled data from MSCOCO used in stage 1 (Figure 1)**: STIC scales well to larger datasets.
5. **More qualitative examples (Figures 2 and 3)** showing how STIC helped improve model performance even in mathematical VQAs.
### **Global A1: Explanation into our prompt design and data generation for alignment fine-tuning.**
Our prompt design for the well-crafted prompt focuses on quality and diversity. We use GPT-4 to generate and sample multiple initial prompts, which are then refined through human filtering. To ensure effectiveness, we test these prompts on MSCOCO samples, verifying their ability to produce well-structured and relevant responses from the model. The restrictions we apply to reflect the quality of the prompt include length (prompt must be between 60 to 150 words to balance informativeness and conciseness), diversity (prompt includes at least 3 distinct aspects or questions about the image to encourage comprehensive analysis), and specificity (while being general, the prompt contains at least 2-3 specific cues or keywords that can be adapted to various image contents).
More generally, instead of relying on explicit human labeling for each model generation pair as in RLHF, which can be very expensive, we adopt an "implicit" preference approach. We work under the assumption that prompts which differ in human preference yield responses that, with high probability, share that same preference ordering. This approach allows us to create effective training data without the need for extensive human annotation.
Our goal is thus not to identify the best or worst prompt for the task, but rather to explore the differences between them. For the design of a good prompt, we aim to guide the model to provide a comprehensive and precise image description. The bad prompts are designed to elicit inaccurate descriptions by setting up a slightly different task (describe objects that would logically exist in the image) for the model. The key is that the discrepancy between good and bad prompts should result in pairs of responses that share the same implicit preference with high probability, which is sufficient for effective DPO training.
Table 2 in our attached PDF presents additional experiments using DaR to demonstrate prompt quality. We compared normal prompts from our main paper (e.g., "Illustrate the details of the picture.") with the hallucination prompts and well-curated prompts used for DPO pair generation. The results show an expected discrepancy in QA performance: hallucination prompts significantly decreased performance, while well-curated prompts maintained a decent performance. We also included results based on a prompt proposed by Llama-3 8B and filtered using the same restrictions. The performance difference between GPT-4 and Llama-3 8B prompts underscores the quality of GPT-4's proposals.
### **Global A2: Progression of stages.**
In the table below, we illustrate the sequential improvement in performance of STIC on ScienceQA. While stage 1 focuses exclusively on enhancing the perception capabilities of the LVLM, it still notably improves performance on downstream VQA tasks. Building on the improved image comprehension achieved in stage 1, stage 2 introduces an enhanced reasoning process that utilizes the model’s self-generated image descriptions and results in an even more significant gain. This enhancement further enables the model to self-augment its prompts with DaR, resulting in the substantial overall performance gain of 6.4% observed.
| Original | After Stage 1 | After Stage 2 | After Stage 2 with DaR |
| :---: | :---: | :---: | :---: |
| 68.86 | 69.96 | 72.48 | 75.26 |
### **Global A3: Explanation on DaR.**
In lines 211-213 and 275-284 of our submission, we discussed DaR. Here, we provide further explanations. We proposed DaR as an additional and optional step that can be employed during inference time. Instead of directly obtaining the model's response to a particular question, DaR first prompts the model to describe the image, then appends this description to the question to finally obtain the answer:
User: `<image>\nDetail the composition and subjects within the frame.`
Model: `<image description>`
User: `<image>\nImage description:\n<image description>\n<question>`
Model: `<response>`
This two-step approach helps the model to better contextualize the question by grounding its response in a detailed understanding of the image. **However, as shown in Table 2 of our submission, DaR alone does not notably improve the performance of a plain model. Instead, it shows the most substantial improvement when combined with STIC fine-tuning.** The foundational improvements made by STIC on the model’s image comprehension ability consequently improved the effectiveness of DaR.
Pdf: /pdf/656e2116ff178600a0a4ed9895feeaf80f93f6e3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Critical Evaluation of AI Feedback for Aligning Large Language Models | Accept (poster) | Summary: The paper explores the current experimental set up for learning from AI feedback (LAIF), specifically using demonstrations generated by a LLM that is weaker than the LLM used to assign preference labels both for training and evaluation. By comparing training several different base LLMs on demonstrations generated by weaker (GPT3.5) versus stronger (GPT4, Claude) LLMs both in the absence of LAIF and followed by LAIF, the results suggest that current evidence for the benefits of LAIF (noted to be distinct from learning from human feedback - LHF) is over stated. The conclusion and take away is to better construct the SFT datasets such that the available demonstrations come from LLMs of the same quality level as those used to assign preference labels.
Strengths: - The paper is well written and easy to follow. The arguments are well outlined to make the motivations, benefits of the work, and the learnings clear.
- The issue of ensuring that current and widely used experimental designs are correctly constructed is a vital contribution. The potential need for algorithmic adjustments to learn from AI feedback versus human feedback is something that could kick off a whole new avenue of research and investigation in the field.
- This is probably one of the few papers I have seen that attempts to dig into differences in LAIF performance gains for different base models. While the investigation is cursory, it does provide some evidence and hypotheses that can help direct future investigations, either by these authors or others.
Weaknesses: - Some of the motivation for the experiment set up is incongruous. The authors motivate the paper around the LAIF paradigm (not using LLMs as human proxies for algorithm development), and call out why the claims in the paper should be considered separate from LHF. However, they also highlight the importance of having the same critic as evaluator. In practice the final evaluators for a LLM trained with LAIF methods are humans. Therefore, it would be good to have experiments where humans are the evaluators or the critic and evaluator are not the same LLM.
- A clear distinction between LAIF and LHF is made throughout the paper to highlight that the take aways are for LAIF only. However, it seems some of the take aways discussed in "Current base LLMs are insufficiently responsive to AI feedback" (pg. 7) could apply to LHF. Using strong LLMs as a proxy for humans is valid, and when the experiments are looked at from the perspective of LAIF as a proxy for LHF, more conclusions can be drawn. Why would the hypotheses about representational space mismatches not be relevant to LHF?
- The experiment evaluating training the base LLM on demonstrations from Claude prior to LAIF is only run on llama. It would be helpful to at least also see results for Mistral 7B so that more of a comparison can be drawn against Figure 3.
- Different experiments are run with different numbers of base LLMs. The same base LLMs should be used across all main experiments, those whose results are in Figures 2, 3, 4, and 7. If fewer base LLMs are used in some experiments, a justification should be provided.
- The SFT results with GPT and Claude should not be separately compared against the LAIF results using the 10% GPT-3.5 demonstrations versus the GPT-4/Claude demonstrations.
- The story around the section "Completions samples from SFT models are substantially poorer than completions sampled from M_{oracle}" should be better motivated with a clear, strong take away.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Which version of Claude was used?
- Why not use AlpacaEval 2.0? Why evaluate against GPT-3 (text-davinci-003)?
- How do these findings connect and relate to "Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data" (https://arxiv.org/pdf/2404.14367)?
- How did you decide which base models to use in each experiment?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations are limited. There is a single sentence saying that there is only a limited analysis. However, other limitations should be discussed and addressed. For example, what are scenarios under which the analysis/results may not hold? The current experimental set up assumes (implicitly) that the goal is to copy/mimic/match the performance of GPT4/Claude. However, what if someone does not want a final LLM that approximates the distribution of GPT4/Claude? What about in the case of bootstrapping versus distilling? The answers to the questions above are kind of scattered across the paper, but it would be beneficial to have a clear section discussing the boundaries of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for finding our paper well written, and our contributions for experimental design vital. We address your concerns and questions below:
> However, it seems some of the take aways discussed in "Current base LLMs are insufficiently responsive to AI feedback" (pg. 7) could apply to LHF. Using strong LLMs as a proxy for humans is valid, and when the experiments are looked at from the perspective of the LAIF as proxies for LHF, more conclusions can be drawn made. Why would the hypotheses about representational space mismatches not be relevant to LHF?
Thanks for noting this and we agree, and it is possible that some of the base models are insufficiently responsive to HF too (and may partly explain why reward model accuracies for human feedback are low, ~70%). However, in LAIF, the discrepancy between representational strength of the teachers (like GPT-4, Claude) and models (Llama/Mistral-7B) is clearer, and we wanted to be conservative in our claims.
> The story around the section "Completions samples from SFT models are substantially poorer than completions sampled from M_{oracle}" should be better motivated with a clear, strong take away.
We will revise the text to include the takeaway: “Better samples from the student models, generated using better prompts or CoT and similar techniques may generate higher quality samples and allow for LAIF to be more effective”. Please let us know if rephrasing this would be better.
> Which version of Claude was used?
We used Claude-v1. Claude-v2 was found to have a poorer correlation with human judgment in the original AlpacaEval study.
> How do these findings connect and relate to "Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data" (https://arxiv.org/pdf/2404.14367)?
The findings are complementary, as the main message of the cited paper is how to sample data for labeling or query reward model in RLHF, whereas our paper analyzes how effective automatic mechanisms such as AI feedback (loosely, our paper studies the choice of reward model and the paper cited in the question studies the data used during optimization).
> How did you decide which base models to use in each experiment?
Our academic access to Anthropic API expired and was not renewed, so our studies with Claude are more limited as compared to GPT-4.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my question and concerns. I will raise my score accordingly | Summary: The paper evaluates the extent to which AI feedback is helpful in aligning large language models (LLMs) within the commonly used two-step method of improving pre-trained LLMs. This method involves first performing supervised fine-tuning (SFT) and then fine-tuning with reinforcement learning (RL) or direct preference optimization (DPO) using preference feedback. The findings indicate that, in some cases, SFT may outperform the two-step LAIF approach.
Strengths: - The paper includes comprehensive experiments and provides a detailed analysis along with hypotheses explaining the results.
- These experiments encompass a wide range of different LLMs and settings, offering good quantitative metrics and analyses.
- I particularly appreciate the bandit experiments; despite their simplicity, they effectively convey the core ideas and strongly support the paper's claims.
Weaknesses: - I am not entirely certain about the claim that “SFT on strong distribution minimizes any improvements from LAIF.” While this was the case for 3 out of the 4 result settings (including both figures 4 and 7), it is difficult to assert this as a general truth. Could the authors rephrase the claim to be more nuanced?
- In the analysis, the authors make general claims that may not hold true in all cases. For example, for the claim that “AI feedback is effective when there is a discrepancy between the SFT distribution and the evaluator,” the analysis lacks numerical values, and the claims are not nuanced even though they do not hold in all cases. However, I do like the conclusion, where the authors emphasize that the claim does not hold true in all cases.
- It would be interesting to see if the same hypotheses hold in more general LLM settings, such as multi-turn instructions and multi-modal foundation models.
Minor things
- Typo in line 331 “via via LAIF”
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Table 1, are the values in brackets variances or confidence intervals? How many repeated runs were conducted?
- When the paper uses phrases like “weaker SFT target distribution,” how exactly is a target distribution determined to be weaker or stronger?
- It seems that the difference in target distribution is based solely on the percentage of total examples used. It would be interesting to see if the diversity of examples affects the improvements from LAIF and to what extent. For instance, what if the total number of examples remains the same, but the diversity of examples in terms of their distribution in an embedding space is different?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Yes, the authors have sufficiently addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for finding our experiments comprehensive and appreciating our bandit experiments! We answer your questions and concerns below:
> I am not entirely certain about the claim that “SFT on strong distribution minimizes any improvements from LAIF.” While this was the case for 3 out of the 4 result settings (including both figures 4 and 7), it is difficult to assert this as a general truth. Could the authors rephrase the claim to be more nuanced?
> In the analysis, the authors make general claims that may not hold true in all cases …
Thanks for the suggestions. We will revise the text to be nuanced, and point the readers to conclusions explicitly for takeaways. We will revise the specific statement to be more nuanced: “We found LAIF to provide minimal improvement when SFT was a dataset of completions from a strong teacher, in 3 out of the 4 cases”.
> In Table 1, are the values in brackets variances or confidence intervals? How many repeated runs were conducted?
The confidence intervals were computed by the automated evaluation in AlpacaEval, which evaluates models on 800 questions (we will clarify this in our text). Our computational budget did not allow for repeated runs per model/dataset/teacher, and we favored distributing our budget over more models and teachers than repeated runs with a smaller set of models and teachers.
> When the paper uses phrases like “weaker SFT target distribution,” how exactly is a target distribution determined to be weaker or stronger?
Thanks for bringing this up, and we recognize the impreciseness of the notion of strength in this context. The strength of the target distribution in this context is used to refer to the strength of the teacher model, and GPT-3.5 is generally agreed to be worse at instruction following compared to GPT-4 (both subjectively, but also benchmarks like LMSys and AlpacaEval).
> It would be interesting to see if the diversity of examples affects the improvements from LAIF and to what extent. For instance, what if the total number of examples remains the same, but the diversity of examples in terms of their distribution in an embedding space is different?
That’s a great suggestion, and we agree it would be good to explore in future work!
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns and questions! I will keep my current score. | Summary: Learning from AI Feedback (LAIF) has become a popular alternative for improving the instruction-following abilities of large language models (LLMs). Despite its popularity, many unresolved questions remain regarding the actual improvements gained through LAIF. The authors address some of these questions, with a focus on where exactly the improvement in the LAIF pipeline is coming from.
The authors found that many of the improvements in LAIF are attributed to the differences in the weak teacher LLM (that provides the SFT data) and a strong critic LLM (that provides the preference data). Their empirical evidence demonstrated this issue across a wide range of base, critic, evaluator, and teacher models.
Moreover, the authors suggested two potential explanations for LAIF's ineffectiveness: either the preference dataset is not sufficiently informative, or the base model has inherent limitations. Finally, the authors offer several insightful suggestions for future research in the area of LAIF.
Strengths: - The paper is well-written and easy to follow.
- The authors are addressing a significant problem.
- The experiments were really well designed and performed.
- The authors show the robustness of their observation by running experiments across several models and dataset splits.
- The authors not only identified the problem in LAIF but also provided some possible explanations that enhanced the reader's understanding of it.
- LAIF is an important path forward for improving LLM instruction following capability. Therefore, as outlined in the paper, it is important to systematically identify the problems in LAIF so that researchers can address them.
- The authors address most of my obvious internal questions and thoughts on various experiment details and design decisions.
Weaknesses: - Some of the observations in the paper are straightforward.
- A few more experiments should be included in the paper to complete some of its conclusions.
- The 10% rule doesn't always hold true. In Figure 3 and Figure 4, SFT 10% performs worse than SFT 100%. A better split could improve SFT performance when doing SFT + LAIF, which is important based on the paper's conclusions.
- The LAIF ablation experiments for addressing LAIF's ineffectiveness, attributed either to the preference data or to the base model, have issues. The authors sample data from Llama or Mistral and train on the data using Llama or Mistral as the base model. However, the results could suffer from overoptimization problems [1].
[1] Scaling Laws for Reward Model Overoptimization by Gao et al., 2022
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 64: If the discrepancy between SFT and AI feedback is minimal, then doing SFT can suffice. I am trying to understand what this statement implies; it seems pretty straightforward. If there is no gap between the two, then you do not need the second step.
- Line 206: Missing citation: the use of completions from M_{teacher} as one of the inputs in the preference pair was observed in [1].
- Also, why not use the M_{teacher} to generate both responses to ensure that the preference data is high quality? DPO [2] mentioned this setting in the "DPO outline" section; essentially, you can train on \pi_{ref} on the preferred completions.
- Line 252: Would you agree that M_{critic} does not fully capture the quality of the preference dataset? If so, then comparing M_{critic} versus M_{teacher} is a little odd, because M_{teacher} fully affects the quality of the SFT dataset, whereas M_{critic} only partially affects the quality of the preference dataset. If M_{critic} generated both the chosen and rejected completions, then you could be certain that the generations are high quality and that the M_{critic} learning signal is good.
- Figures 4 and 7 show that the 10% threshold is ideal for certain models and settings. Have you experimented with a different percentage threshold?
- Missing citations [3], [4], [5]
[1] Coactive Learning for Large Language Models using Implicit User Feedback by Tucker et al. ICML 2024
[2] Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al. NeurIPS 2023
[3] Starling-7B: Increasing LLM Helpfulness & Harmlessness with RLAIF by Zhu et al. 2023
[4] UltraFeedback: Boosting Language Models with High-quality Feedback by Cui et al. 2023
[5] Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models by Bansal et al. ICLR 2024
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > 10% rule doesn’t always hold
> Figures 4 and 7 show that the 10% threshold is ideal for certain models and settings. Have you experimented with a different percentage threshold?
Thanks for noting this. Using an increasingly larger split for SFT would weaken our claims comparing LAIF with SFT (for example, consider using 99% of the prompts for SFT and 1% for LAIF). We heuristically chose a split which provides enough SFT data for a good initialization for LAIF, while using most of the prompts for LAIF. We will revise the text to note that this heuristic may not be optimal.
> Line 64: If the discrepancy between SFT and AI feedback is minimal, then doing SFT can suffice. I am trying to understand what this statement implies; it seems pretty straightforward. If there is no gap between the two, then you do not need the second step.
That is the right interpretation: if you do not see any substantial improvement from LAIF, it was not needed in the first place. We will rephrase this sentence to be clearer.
> Line 206: Missing citation, using completions from M_{teacher} as one of the inputs in the preference pair results, was observed in [1].
> Missing citations [3], [4], [5]
Thanks for the pointers. We will add the missing citations in the next revision.
> Also, why not use the M_{teacher} to generate both responses to ensure that the preference data is high quality? DPO [2] mentioned this setting in the "DPO outline" section; essentially, you can train on \pi_{ref} on the preferred completions.
This is a great suggestion, and would be good to explore better in future work. There are two concerns:
- If generating a completion for SFT from the teacher costs 1 unit of supervision per prompt, and generating a preference label (given two completions) costs 1 unit of supervision, our current setup for LAIF uses 2 units of supervision per prompt, compared to SFT which only gets 1 unit per prompt. Generating both the completions using the teacher would mean 3 units of supervision per prompt. We ideally want to compare SFT and LAIF with equivalent amounts of supervision, but our setup is already unfairly biased to LAIF.
- Our preliminary experiments find that sampling both completions from the teacher underperforms our current scheme of using 1 completion from the teacher and 1 from the model. DPO [1] discusses how samples not from the reference model can hurt performance. While there may be ways to improve the pipeline when both samples come from the teacher (for example, training \pi_{ref} on both preferred and dispreferred completions), this requires deeper exploration beyond the scope of this paper.
[1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al. NeurIPS 2023
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns and questions! I will keep my current score. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Edit Distance Robust Watermarks via Indexing Pseudorandom Codes | Accept (poster) | Summary: This paper constructs an LLM watermarking scheme which is robust to edit distance perturbations. The construction is carried out carefully in stages. The authors first construct pseudorandom codes (PRC) that are robust to constant fraction substitutions assuming the existence of local weak pseudorandom functions (PRF) (see below for an informal definition of PRFs). Here, PRCs refer to a keyed encoding and decoding scheme $(\mathrm{Enc}_s, \mathrm{Dec}_s)$ that satisfies the following three criteria. Note that for watermarking, the only “message” that needs to be encoded is “this model is watermarked”, which the authors denote by $\emptyset$.
1. **Undetectability.** For poly-time oracle Turing machines, the encoding oracle $\mathrm{Enc}_s: \\{\emptyset\\} \to \\{0,1\\}^n$, which is a *random* function that outputs n-bit strings correlated with a *random* secret key s (unknown to the Turing machine), is indistinguishable from a random oracle that outputs purely random n-bit strings.
2. **Soundness.** For any *fixed* n-bit string y, $\Pr_{s}[\mathrm{Dec}_s(y) = \perp] \ge 1-\mathrm{negl}(n)$, where $\perp$ is the symbol representing NOT watermarked. In words, any fixed string is rejected by the decoder with all but negligible probability. Hence, a random n-bit string y drawn from the uniform distribution (and independently of the secret key s) will be rejected by $\mathrm{Dec}_s$ with high probability.
3. **Robustness.** Denoting by $\mathcal{E} : \\{0,1\\}^n \to \\{0,1\\}^n$ any corruption channel restricted to at most a constant fraction of substitutions, we have $\Pr[ \mathrm{Dec}_s(\mathcal{E}(y)) = \emptyset \mid y \leftarrow \mathrm{Enc}_s(\emptyset)] \ge 1-\mathrm{negl}(n)$.
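As a toy illustration of the robustness property in isolation (this is emphatically not a PRC — it has neither undetectability nor soundness), a repetition code with majority-vote decoding tolerates substitutions as long as fewer than half of the copies of each bit are flipped:

```python
def encode(bits, reps=7):
    # repeat every bit `reps` times
    return [b for b in bits for _ in range(reps)]

def decode(codeword, reps=7):
    # majority vote within each block of `reps` copies
    return [1 if sum(codeword[i:i + reps]) > reps // 2 else 0
            for i in range(0, len(codeword), reps)]

msg = [1, 0, 1, 1]
cw = encode(msg)
# flip 3 of every 7 positions (~43% substitutions, under half of each block)
corrupted = [1 - b if i % 7 < 3 else b for i, b in enumerate(cw)]
assert decode(corrupted) == msg
```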
The key property in this PRC construction is its robustness, which follows (non-trivially) from the use of local weak PRFs. Intuitively, *local* PRFs that depend only on log(n) bits are just the right primitives for constant-fraction perturbations. For example, if the string y is corrupted at random coordinates, most of the significant bits of the PRF evade corruption with high probability.
However, the above PRC is only robust to substitution perturbations, not to edit distance perturbations. This brings us to their second stage, which is a generic transformation that takes the above PRC to *index* PRCs. The idea behind index PRCs is simple. Because corruption in edit distance can *delete* coordinates of the given string y, we need to repeat its content in some way to protect against deletions. Given a string, say 1001, one way to represent it is through the indices of its support, {1, 4}. After all, n-bit strings are in one-to-one correspondence with subsets of [n]. Thus, one can repeat the "contents" of the given string y by cleverly storing and repeating the "indices" of y. Significantly expanding on this observation, the authors construct a PRC that is robust to constant fraction edit distance perturbations. The caveat, of interest to practitioners, is that this PRC requires the alphabet size to increase polynomially with the security parameter, a core parameter controlling undetectability (and hence the quality of the watermarked model), soundness, and robustness.
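The bijection between n-bit strings and subsets of [n] mentioned above is trivial to write down (a minimal sketch; the actual indexing PRC layers pseudorandomness and redundancy on top of this correspondence):

```python
def to_indices(bits):
    # map an n-bit string to the set of (1-based) positions of its 1s
    return {i + 1 for i, b in enumerate(bits) if b}

def from_indices(indices, n):
    # inverse map: recover the n-bit string from its support
    return [1 if i + 1 in indices else 0 for i in range(n)]

assert to_indices([1, 0, 0, 1]) == {1, 4}
assert from_indices({1, 4}, 4) == [1, 0, 0, 1]
```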
Based on their careful PRC constructions, the paper culminates by constructing a watermarking scheme for LLMs (i.e., general discrete autoregressive distributions) whose watermark can be robustly and reliably detected if the generated text contains a high empirical entropy substring.
**Local pseudorandom functions.** Local PRFs are PRFs which depend only on a few bits of the source randomness. That is, for each secret key s, there exists a support set $J \subset [n]$ such that $F_s(x) = G_s(x_J)$, where $F_s : \\{0,1\\}^n \to \\{0,1\\}$ and $G_s : \\{0,1\\}^\ell \to \\{0,1\\}$. For robustness against corruptions to a constant fraction of the coordinates, the authors require $\ell = \log n$. It seems that the locality requirement is incompatible with "strong" PRFs, in which the adversary (an oracle Turing machine that attempts to distinguish the PRF oracle from the random oracle) has query access. This is a non-issue in this work since the authors use weak PRFs, in which the adversary only has random sample access to the oracles.
Strengths: The contributions of this paper are significant both theoretically and practically. It is rare for a purely theoretical work, which is aesthetically and technically pleasing in its own right, to directly address an important practical problem of wide interest. I believe this paper has the potential to achieve award-worthy quality if its weaknesses are addressed.
**Addresses a practically important ML question.** The widespread influence of LLMs like ChatGPT on our daily lives has raised numerous concerns and has spurred recent interest in watermarking schemes for LLMs. While there have been several recent works on LLM watermarking, this work is the first to achieve *constant fraction* edit-distance robust watermarking schemes with *reasonable* storage requirements for the secret key. Moreover, the authors achieve this by carefully constructing highly non-trivial theoretical objects (PRCs w.r.t. edit distance) which may have other exciting theoretical applications.
**Technical novelty and significance.** The construction of PRCs based on local weak PRFs, and the generic transformation to index PRCs which are robust to edit distance perturbations is novel. In particular, the connection between PRCs and local weak PRFs seems to be novel, but they seem particularly well-suited for each other. Local weak PRFs are insensitive to perturbations in most coordinates of $(x, F_s(x))$, which is exactly what is needed for the robustness of PRCs.
The techniques involved in analyzing these schemes are non-trivial since the corruption channels can perturb *adversarially* (but restricted to at most p corruptions). Moreover, the test statistic which is used to test the existence of watermarks is a sum of (weakly) dependent terms to which standard concentration inequalities do not apply. The authors resolve such technical issues by designing the encoding scheme, using additional randomness, to mitigate the power of the perturbation adversary and using recently developed concentration inequalities that apply to weakly dependent random variables.
Weaknesses: While the paper makes substantial technical contributions, its presentation falls short.
- **Insufficiently polished exposition on application to LLM watermarking (Section 5).** While the paper does a great job in presenting the PRC constructions, the culminating application to LLM watermarking in Section 5 is presented in a very rushed way. The "full detail" presentation in Appendix E is hard to follow as well. The key issue seems to be the lack of separation between the high-level intuition and the technical complications, as well as the poor choice of notation. Theorem statements in cryptography often resemble an alphabet soup, so careful and clear notation is crucial. A few suggestions are:
1. Start Section 5 by explaining the basic idea that connects PRCs and watermarking schemes. The basic idea is that for each token generation, we first sample $y \leftarrow \Sigma$ from the PRC alphabet $\Sigma$, and use rejection sampling with the condition $\phi(v) = y$ to sample the next token $v$, where $\phi : \mathcal{V} \to \Sigma$ is a hash map.
2. In Algorithm 3, using $p_i$ to denote a *distribution* when $p_0$ and $p$ have already been used to denote constants in $(0,1)$ is confusing. Perhaps $\mu_i$ might be less confusing?
3. The cluttered subscript notation under the $\Pr$ sign is both misleading and hard to parse. It's misleading because it does not fully specify all the randomness. For example, in Eq. (1) (page 5, line 185) the randomness in $\Pr$ not only depends on the $\mathsf{sk} \leftarrow \mathsf{KeyGen}$, but also on the internal randomness of the "adversary". It might be less confusing to enumerate all elements of randomness as was done in Appendix C.3 (page 20, line 811). In addition, expressions like Eq. (37) in Claim E.10 are quite overwhelming.
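The basic PRC-to-watermark connection suggested in point 1 can be sketched as follows (all names here — `SIGMA`, `VOCAB`, `phi`, `sample_token` — are illustrative assumptions, not the paper's notation, and real schemes must handle low-entropy steps far more carefully):

```python
import random

# Hypothetical sketch: draw a PRC symbol y, then rejection-sample the next
# token v from the model's distribution until its hash phi(v) equals y.

SIGMA = [0, 1]                      # toy PRC alphabet
VOCAB = ["a", "b", "c", "d"]        # toy token vocabulary

def phi(token):
    # toy hash from tokens to the PRC alphabet
    return VOCAB.index(token) % len(SIGMA)

def sample_token(model_probs, y, max_tries=1000):
    # rejection sampling: draw from the model's next-token distribution
    # until the sampled token hashes to the desired PRC symbol y
    for _ in range(max_tries):
        v = random.choices(VOCAB, weights=model_probs)[0]
        if phi(v) == y:
            return v
    return None  # toy fallback when no suitable token is found

random.seed(0)
tok = sample_token([0.4, 0.3, 0.2, 0.1], y=1)
assert tok in ("b", "d")            # the tokens that hash to 1
```

The intuition behind undetectability is that the PRC symbols y are computationally indistinguishable from uniform randomness, so (under the paper's conditions) the conditioned text distribution remains indistinguishable from unwatermarked sampling.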
- **Lack of formal description of the perturbation channel.** There seem to be two unrelated "adversaries" to the watermarking scheme. The "detection adversary" which is a poly-time oracle Turing machine that attempts to detect the watermark, and the "perturbation adversary", i.e., the perturbation channel, which is a (potentially randomized) function that perturbs the generated text. It might be worth highlighting that the modeled perturbation channel is *static*. That is, the same (potentially randomized) function is applied to the generated text and robustness only holds w.r.t. this *static* adversary. This seems to be an important distinction since robustness against a *dynamic* perturbation adversary which interacts with the watermarking scheme over many rounds appears to be impossible, as the authors note (page 1, line 34-35).
> *it is necessary to strike a balance in terms of the power of adversaries to which a watermarking scheme enjoys robustness*
- **Contextualizing the local weak PRF assumption.** It's not clear to me why the existence of local weak PRFs is weaker than the existence of public-key cryptography. Is there a formal reduction showing that public-key cryptography implies local weak PRFs? I don't think the current explanation based on Impagliazzo's worlds is adequate. Without context, it's unclear what being in Minicrypt or Cryptomania even means. Is there a single "complete" primitive that generates all cryptographic tools in each world? Do the worlds form a strictly increasing sequence (in set inclusion)?
**Minor comments**
- In Section 4, when introducing *indexing* PRCs, it might be helpful to include a diagram demonstrating the "index" encoding. For example, one can draw a diagram showing that 100101 maps to {1, 4, 6}.
- The hyperref for "Line 8" in (page 6, line 248) is incorrect. It should link to Line 8 in Algorithm 2, not Algorithm 1.
Technical Quality: 3
Clarity: 2
Questions for Authors: - It seems that the locality requirement is incompatible with “strong” PRFs in which adversary (an oracle Turing machine that attempts to distinguish the PRF oracle and the random oracle) has query access. For example, if an adversary has query access, then it can query the all zeros string and its 1-Hamming neighbors. Then, the adversary can decide that the unknown oracle is a PRF If the label bit is significantly insensitive to these 1-bit perturbations. Is this the right intuition?
- Is there an object analogous to a local weak PRF for continuous domains? Local PRFs seem to be the right objects for countering "sparse" perturbation channels which affect only a few coordinates of the sample. What would be appropriate for perturbation channels in continuous domains (e.g., $\ell_2$ bounded perturbations)?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments. Regarding your comments on presentation: we will adjust our exposition accordingly. Regarding local weak PRFs, we do not know of a formal reduction showing that public-key cryptography implies local weak PRFs. We will adjust the discussion to add some additional background from reference [30] (Impagliazzo, 1995) on the meaning of the five worlds (which is somewhat informal, as mentioned on p.3 of Impagliazzo's paper).
Question 1: Yes, the locality requirement is incompatible with strong PRFs which give the adversary query access, since $\log(n)$-juntas can be learned in poly(n) time (see [BL97] below). Roughly speaking, the idea of this result is that for a $\log(n)$-junta $F(\cdot)$, an adversary finds x, y so that $F(x) \neq F(y)$, then queries vertices on a path in the hypercube between x and y, which yields at least 1 of the $\log(n)$ influential coordinates. It then recurses to find all influential coordinates. It can then learn $F$ by querying all $2^{\log n} = n$ possible settings of the $\log(n)$ influential coordinates.
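A minimal sketch of the path-walking step described here (membership queries on a junta, locating one influential coordinate via binary search over the differing coordinates; this is an illustration, not the full [BL97] learning algorithm):

```python
def find_influential(F, x, y):
    # Given query access to F and points x, y with F(x) != F(y), walk a path
    # in the hypercube between them, halving the set of differing coordinates
    # each step; the single surviving coordinate is influential.
    diff = [i for i in range(len(x)) if x[i] != y[i]]
    while len(diff) > 1:
        half = diff[:len(diff) // 2]
        mid = list(x)
        for i in half:
            mid[i] = y[i]
        mid = tuple(mid)
        if F(mid) != F(x):
            y, diff = mid, half                  # the flip happened inside `half`
        else:
            x, diff = mid, diff[len(diff) // 2:]  # it happened in the other half
    return diff[0]

F = lambda z: z[3]   # a 1-junta depending only on coordinate 3
assert find_influential(F, (0,) * 8, (1,) * 8) == 3
```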
[BL97] "A. Blum and P. Langley. Selection of relevant features and examples in machine learning, 1997"
Question 2: $\ell_2$ perturbations could be reasonable, though it is unclear if local PRFs can be used to achieve robustness to such perturbations. This is an interesting direction for future work.
---
Rebuttal 2:
Comment: Thank you for addressing my questions and being receptive. I am quite excited by the results and would be happy to see this paper accepted.
Additionally, I believe that the theoretical merits of this paper alone justify its acceptance. The proposed watermarking scheme has negligible Type-1 error, is *provably* robust in edit distance, and satisfies "undetectability", which means that the watermarked model is *computationally indistinguishable* from the original model. The fact that we can satisfy all three criteria (soundness, edit distance robustness, and undetectability) is already surprising and highly non-trivial.
Strengths: 1. The paper introduces a new watermarking scheme, which can handle a broader range of adversarial edits, including insertions and deletions.
2. The authors provide a thorough theoretical analysis and proof of the robustness and undetectability of their proposed scheme, grounded in cryptographic principles.
Weaknesses: 1. The theoretical nature of the work might pose challenges in practical implementation, especially concerning the scalability and efficiency of the proposed watermarking scheme.
2. The paper lacks empirical evaluation of the proposed watermarking scheme. Without experiment data, it is very hard to assess the effectiveness of the proposed method in realistic scenarios.
3. The requirement for a constant entropy rate in the generated text might not be met by all language models, potentially limiting the scheme's effectiveness in low-entropy contexts.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What are the specific steps required to implement this watermarking scheme in existing language models, and how does it impact the model's performance in terms of speed and resource consumption?
2. How does the proposed scheme perform against paraphrase and translation attacks?
3. Watermark stealing attack has been proposed recently [1,2,3], which can infer the parameters of the watermarking scheme and remove the watermark from the text. How does the proposed scheme perform against watermark stealing attack?
[1] N. Jovanović, R. Staab, and M. Vechev, “Watermark Stealing in Large Language Models.” http://arxiv.org/abs/2402.19361
[2] Q. Wu and V. Chandrasekaran, “Bypassing LLM Watermarks with Color-Aware Substitutions.” http://arxiv.org/abs/2403.14719
[3] Z. Zhang et al., “Large Language Model Watermark Stealing With Mixed Integer Programming.” http://arxiv.org/abs/2405.19677
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1. The scheme's scalability to large-scale language models and extensive datasets remains untested, which might be a critical factor in practical applications.
2. The requirement for a large alphabet size might necessitate modifications in existing tokenization schemes, posing a barrier to seamless integration.
3. The effectiveness of the watermarking scheme is contingent upon the entropy rate of the generated text, which might not be uniformly high across different language models and applications.
4. While the theoretical foundations are robust, empirical validation through extensive experimentation on various language models and datasets is needed to ascertain the scheme's practical efficacy and resilience to adversarial attacks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
Many of the weaknesses and limitations you mention relate to the lack of implementation and experiments. We want to emphasize that developing a fully practical watermarking scheme for immediate use in LLMs is not the point of this paper. Rather, the purpose is to lay the theoretical foundations for a technique to achieve edit-distance robust undetectable watermarking (for which there are not yet any practical schemes), and that additional work is needed to make our scheme fully practical.
Question 1: The computational cost of generating each next token for our watermarking scheme is linear in the alphabet size of the LLM. Note that it takes linear in alphabet size time to even generate non-watermarked output. We expect that the actual empirical (constant factor) blowup in computational cost will be quite small due to the relative simplicity of our procedure. That being said, we remark that our scheme requires a large alphabet size, which poses challenges for a direct implementation of it on existing LLMs; see the response to Question 1 of Reviewer YwkB for further details on this point.
Question 2: If the paraphrase attack produces text which is close to the original in edit distance (i.e., if it is not an overly aggressive paraphrasing), then our scheme would still detect paraphrased watermarked text. To deal with a translation attack, one could augment our detection algorithm to translate its input text into many common languages and output True if any of the resulting translations is detected as watermarked. This would overcome the translation attack assuming that translating a string x from one language to another and back leads to a string x' which is close to x in edit distance, which we expect to be a reasonable assumption.
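For concreteness, "close in edit distance" here refers to standard Levenshtein distance, computable with the usual dynamic program (a generic sketch, independent of the paper's scheme):

```python
def edit_distance(a, b):
    # standard Levenshtein DP over a single rolling row:
    # insertions, deletions, and substitutions all cost 1
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # delete a[i-1]
                        dp[j - 1] + 1,                      # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))      # substitute or match
            prev = cur
    return dp[n]

assert edit_distance("kitten", "sitting") == 3
```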
Question 3: Thank you for pointing out the papers on watermark stealing attacks. While ruling out such attacks is beyond the scope of our paper, we are hopeful that our PRCs and watermarking schemes (as well as those of [Christ & Gunn, 2024]) are resistant to such attacks. Roughly speaking, we believe that it might be possible to show this using the relatively strong cryptographic properties of local weak PRFs (or, in the case of [Christ & Gunn, 2024], the relatively strong cryptographic properties of the noisy parity problem). Establishing such robustness to stealing attacks is an interesting direction for future work, even for the case of substitution PRCs as in [Christ & Gunn, 2024].
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my question. I would like to raise my score due to your solid theoretical analysis. However, I still believe that including empirical validation in your paper would enhance its overall strength. | Summary: This paper proposes a new pseudorandom code (PRC) called an _indexing PRC_ over a polynomially-sized alphabet that is robust to a constant number of adversarial edit corruptions (insertions, deletions or substitutions). It is constructed as a wrapper around a substitution-robust PRC, which has been studied in prior work. The paper also proposes a new substitution-robust PRC that relies on weaker cryptographic hardness assumptions than prior work. These cryptographic primitives are then used to construct a watermarking scheme for autoregressive models (e.g. LLMs), which is proven to satisfy three important properties: undetectability, soundness and edit robustness.
Strengths: **Originality and significance:**
The paper builds on recent work by Christ & Gunn (2024), who developed pseudorandom codes (PRCs) over binary alphabets that are robust to adversarial substitutions and limited deletions. This work makes several advances in PRCs, which enables an even more compelling application to watermarking of autoregressive models:
- A new substitution-robust PRC is proposed based on local weak pseudorandom functions that follows from weaker average-case hardness assumptions than Christ & Gunn’s PRC
- A new indexing PRC is proposed that is robust to a constant fraction of adversarial edits
- The indexing PRC is used to develop a watermarking scheme for autoregressive models with large alphabets (whereas Christ & Gunn considered a model with a binary alphabet).
**Clarity:** The authors have done a good job of distilling quite complex/technical work into 9 pages. Despite not having a background in cryptography, I found the paper reasonably accessible. I appreciated the summary of the main results on p. 3-4.
Weaknesses: 1. While the paper is motivated by watermarking of generative models, most of the paper’s contributions are in cryptography. Roughly two-thirds of the paper covers pseudorandom codes, while the final connection to watermarking (Section 5) consumes only half a page. I wonder whether another venue would be a better fit for the work – both in terms of the audience and the ability of reviewers to assess correctness.
1. After having read the paper, it’s not clear to me whether the proposed watermarking scheme can be implemented in practice or not. I think the work is valuable whether the scheme is practical or not, however it’s important to be upfront about practical limitations (if there are any) to help guide future work.
1. Related to the above point about practicality: there is no empirical evaluation of the watermarking scheme. This is not a major issue in my view, as the paper makes very strong theoretical contributions.
1. I wonder whether edit robustness is a desirable property for a watermarking scheme. Consider the case of a vendor who is concerned about safety. An adversary could query the vendor’s model, receive safe watermarked output and then substantially edit the watermarked output to make it unsafe. The watermark is preserved in the unsafe output since it is edit robust, provided the number of edits falls below some threshold. This could be problematic for the vendor’s reputation, as the adversary can assert that the vendor’s model generated unsafe content.
**Minor:**
- eqn (2): Should the dimension of s be $\ell(\lambda)$ rather than $n(\lambda)$? Also, there is an inconsistency between the two terms: one is an expectation over an indicator function, whereas the other is an expectation over the output of $\widetilde{\mathrm{Adv}}^{F_\mathrm{Unif}}$.
- line 217: Output of $\widetilde{\mathrm{Adv}}^G$ is undefined
- line 222: Call to $F_s(\cdot)$ returns a tuple in the space $\\{0, 1\\}^n \times \\{0, 1\\}$, which conflicts with the return type given in line 214.
- line 244: Should $z \in$ be $z \sim$?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Are there any barriers to instantiating the proposed watermarking scheme?
- Is edit distance robustness sometimes undesirable for watermarking?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, limitations are discussed on p. 3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments.
Minor questions:
- Eq. (2): Yes, dimension of $s$ should be $\ell(\lambda)$. And yes, we forgot an "=1" in the second term.
- Line 217: $\widetilde{\mathrm{Adv}}^G$ refers to the output of the algorithm Adv (which is 0 or 1) when, each time Adv decides to query $G$, it receives $(x, G(x))$ for a uniformly random $x$.
- Line 222: Thanks, we will clarify that the oracle returns just the output of the function $F_s(x) \in \{0,1\}$ (though of course Adv knows the random input $x$ as well).
- Line 244: Yes, should be $z \sim$.
Questions:
1. Using techniques from our theoretical watermarking scheme to develop a practical implementation for use with LLMs is an important direction for future work. The main limitation of our present theoretical results which may complicate a practical implementation is as follows: The alphabet size $|\Sigma(\lambda)|$ is required to grow exponentially in the inverse of the parameter $\alpha$ (see the statement of Theorem E.2). In turn, the parameter $\alpha$ is proportional to the entropy rate of the text needed to guarantee substring robustness (see Definition 2.6 and the setting of $\beta_\lambda(\ell) = O(\alpha \cdot \ell)$ in Theorem E.2). For typical LLMs, the alphabet size is likely smaller than our required value of $|\Sigma(\lambda)|$ given the entropy rates observed empirically in natural language.
On the other hand, we believe that future work aimed at developing modifications of our watermarking scheme with an eye towards practical implementation will be successful. One idea which seems promising is to simulate a larger alphabet by grouping tokens together, and to aim accordingly for a slightly weaker robustness guarantee.
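To make the grouping idea concrete, here is a rough, hypothetical sketch (not part of the paper's construction; all names are illustrative): treating $k$ consecutive base tokens as one super-symbol simulates an alphabet of size $V^k$ over a vocabulary of size $V$, at the cost of a shorter effective sequence.

```python
# Hypothetical sketch of the token-grouping idea (not the paper's construction):
# packing group_size consecutive tokens into one "super-symbol" simulates an
# alphabet of size vocab_size ** group_size over the base vocabulary.

def effective_alphabet_size(vocab_size: int, group_size: int) -> int:
    """Alphabet size simulated by treating group_size tokens as one symbol."""
    return vocab_size ** group_size

def group_tokens(tokens, group_size):
    """Pack consecutive tokens into super-symbols; a trailing remainder is dropped."""
    usable = len(tokens) - len(tokens) % group_size
    return [tuple(tokens[i:i + group_size]) for i in range(0, usable, group_size)]
```

For example, pairing tokens from a 50,000-token vocabulary already yields a simulated alphabet of 2.5 billion symbols, which is the direction the rebuttal suggests for meeting the required $|\Sigma(\lambda)|$.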
2. It sounds like you are asking about using watermarking for public attribution. It is indeed correct that the same "Detect" function cannot be used for both (a) detecting watermarked output robustly, and (b) as a sort of signature scheme to attribute text to a certain language model. Section 7.4 of [Christ & Gunn, 2024] proposes a solution to this dilemma by constructing watermarking schemes with "unforgeable public attribution": their scheme has an Attribution function, which is not robust and indicates when a portion of its input text is copied verbatim from the model, as well as a Detect function, which is robust and indicates when a portion of its input is edit-distance near to a string output by the model. The key point is that a "True" output of Detect should not be interpreted as attributing the text to the model.
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my questions. I continue to advocate for acceptance, as the paper provides strong theoretical contributions, even though there are practical limitations. I recommend addressing Q1 more explicitly in the paper, as it's likely to be a concern for many in the NeurIPS community (three of the four reviewers asked about it).
---
Reply to Comment 1.1.1:
Comment: Thank you! Yes, we will update the paper to address Q1 more explicitly. | Summary: The paper presents an innovative approach to embedding an undetectable and robust watermark in AI-generated text. The authors take three main steps to reach this goal. They first design a robust pseudorandom code (PRC) over a binary alphabet, and then turn it into a polynomial-sized-alphabet PRC robust to a fraction of edits. Finally, they bridge the PRC and the LLM watermark using a carefully designed mapping function to complete the whole watermarking system. The proposed approach is theoretically proven to be undetectable and robust to a constant fraction of edit attacks.
Strengths: The paper addresses a critical need for a theoretically robust LLM watermark.
Originality: Using a designed mapping function to bridge the gap between PRC and LLM watermark and proposing the theoretical robustness is innovative.
Quality: The methodology is rigorously developed.
Clarity: The paper is well-structured and the writing is clear, making the complex concepts accessible.
Significance: The work has the potential to significantly impact the field of detecting AI-generated text.
Weaknesses: 1. Lack of real-world evaluation of the proposed approach.
2. Lack of efficiency comparison with the current watermarking framework.
3. Lack of discussion of trade-off between undetectability and robustness.
4. Lack of scalability analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have the authors considered evaluating the proposed method in real-world scenarios to better understand its practical applicability?
2. How does the computational efficiency of the proposed method compare to existing watermarking frameworks?
3. Is there a trade-off between undetectability and robustness? For example, in Lemma D.8, the robustness is achieved at very low $p_0$ (since $C_{\text{rob}}$ should be a large constant value). Will undetectability be affected by low values of $p_0$? If the trade-off exists, how is the trade-off reflected in the theorems of the paper?
4. What are the potential limitations or challenges in scaling the proposed method up for large-scale NLP tasks?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Lack of evaluation in real scenarios: the authors did not show any experiments on a real LLM. Testing in the real world would provide a better assessment of the method's practical applicability and robustness. Adding real examples of watermarked AI-generated text would illustrate the problem of undetectability, and reporting the actual detection accuracy would bridge the gap between theory and practice.
This paper considers a "constant-rate" insertion/deletion/substitution attacker model; what about attackers who perform paraphrasing or adaptive attacks? Is the proposed method robust to them?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
Question 1: Using techniques from our theoretical watermarking scheme to develop a practical implementation for use with LLMs is an important direction for future work. The main limitation of our present theoretical results which may complicate a practical implementation is as follows: The alphabet size $|\Sigma(\lambda)|$ is required to grow exponentially in the inverse of the parameter $\alpha$ (see the statement of Theorem E.2). In turn, the parameter $\alpha$ is proportional to the entropy rate of the text needed to guarantee substring robustness (see Definition 2.6 and the setting of $\beta_\lambda(\ell) = O(\alpha \cdot \ell)$ in Theorem E.2). For typical LLMs, the alphabet size is likely smaller than our required value of $|\Sigma(\lambda)|$ given the entropy rates observed empirically in natural language.
On the other hand, we believe that future work aimed at developing modifications of our watermarking scheme with an eye towards practical implementation will be successful. One idea which seems promising is to simulate a larger alphabet by grouping tokens together, and to aim accordingly for a slightly weaker robustness guarantee.
Question 2: Modulo the limitations discussed in Question 1, our watermarking scheme is very efficient (comparable to, e.g., that of [Christ & Gunn, 2024]). In particular, the computational cost of generating each next token is linear in the alphabet size of the LLM. Note that generating even non-watermarked output takes time linear in the alphabet size.
Question 3: Yes, there is a joint tradeoff between undetectability, robustness, and blocklength of the watermark, even for substitution PRCs (i.e., Theorem 3.2), which then carries over to our edit-distance robust PRCs, as in Lemma D.8. The precise tradeoff is reflected in Equation (3) in the proof of Theorem 3.2: we show in the proof that the blocklength parameter $N(\lambda)$ to achieve a *fixed* undetectability guarantee must grow exponentially in $\log(1/(1-2p))$ through its dependence on $m(\lambda)$. In other words, keeping the block length fixed, taking $p \to 1/2$ will lead to a weaker undetectability guarantee. This does not manifest itself in the theorem statements since we always think of $p$ as a constant and so values of $p$ closer to $1/2$ result in a polynomially weaker undetectability guarantee for fixed blocklength, which is insignificant compared to the superpolynomial guarantee of undetectability (and we do not distinguish between different polynomials). We remark that the results of [Christ & Gunn, 2024] have essentially the same tradeoff; please see the end of Section E.1 for further discussion on this point.
Question 4: The main challenge lies in handling the tension between the fact that in many large-scale NLP tasks, the entropy rate is relatively low and the resulting alphabet size that our results require would be too large compared to the number of tokens. We believe that there are promising techniques which will be able to alleviate these limitations in future work for practical applications; see Question 1 for more details.
---
Rebuttal Comment 1.1:
Comment: We thank the authors for the response and decide to keep the score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CLIP in Mirror: Disentangling text from visual images through reflection | Accept (poster) | Summary: This paper attempts to address typographic attacks by disentangling visual and language representations. The proposed framework, MirrorCLIP, leverages the observation that visual models struggle to recognize text semantics in mirrored images. By using both original and flipped images, MirrorCLIP compares their features in the latent space to create disentangling masks. These masks aim to separate textual and visual elements more precisely, leading to improved disentangled representations. However, while the experimental results demonstrate improvements on the current dataset, they also prompt questions about MirrorCLIP's actual effectiveness in disentangling visual and textual representations.
Strengths: 1. Clarity and Conceptual Simplicity: The idea behind MirrorCLIP is straightforward, and the writing is consistently clear and accessible.
2. Training-Free Methodology: Employing a training-free approach, MirrorCLIP demonstrates an effective strategy for tackling typographic attacks.
Weaknesses: 1. The proposed MirrorCLIP framework might be easily circumvented. For instance, by overlaying text and its mirrored version on one image, the method's ability to disentangle textual features could be compromised.
2. The hypothesis that "mirrored texts are difficult for visual models" is overly generalized. While this may hold true for most cases, exceptions exist, such as palindromic words like "mom" or ambiguities arising from handwritten text, which can still be recognized or confused by visual models.
3. In Figure 7, the results in the first row show that the textual features do not generate corresponding semantics (e.g., dog and earphones), but rather produce nonsensical words. This raises questions about whether MirrorCLIP truly disentangles the semantics of text or merely separates text-form visual features.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Does the disentangling mask still work if the typography includes both text and its mirrored version?
2. Why do the "textual features results" in the first row of Figure 7 not produce corresponding semantics of the typography?
3. In Table 6, why is the text recognition accuracy as high as 61.03 when textual features are zeroed out? Does this indicate that the disentangling of textual features is insufficient?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1&Q1:** Does … still work … text and its mirrored version? The proposed … might be easily circumvented.
**A1:** Yes, the disentangling mask still works. Although there is a 10.22 drop (59.71 to 49.49) in performance compared to the accuracy with ordinary typography (See Table Ⅱ in attached PDF), MirrorCLIP still achieves disentanglement and defends against typographic attacks.
As shown in Figure Ⅱ(b), we constructed a dataset that contains both the original and the mirrored text. Our results revealed that, after adding the original and mirrored text, the cosine similarity between image features before and after flipping also exhibited a marked decrease, from 0.9855 to 0.8566, as shown in Table Ⅳ. As the core idea of our method is to leverage the lack of feature invariance in CLIP when flipping images, MirrorCLIP can still locate the textual components by comparing image features before and after flipping, as shown in the activation map in Figure Ⅱ(b). Moreover, according to Table Ⅱ, MirrorCLIP still achieves disentanglement with a 9.73-point improvement (39.76 to 49.49) over the baseline, and defends against typographic attacks. We suspect that, besides semantic information, the positional information of the text may also have some impact on the disentanglement of MirrorCLIP. Still, the performance declines compared to the accuracy with ordinary typography, due to significant interference from the original and mirrored text.
We partially disagree: defense against circumvention is not our main focus, and MirrorCLIP is primarily proposed as a disentanglement method. Compared to ordinary typography, typography with both original and mirrored text is a strong attack targeted specifically at our method, and it is not common in the real world. We sincerely appreciate your thorough insights, will add a discussion of this in the limitation section, and will explore defense methods against such a strong attack in the future.
**W2:** The hypothesis … overly generalized … palindromic … "mom" … handwritten text …
**A2:** MirrorCLIP is capable of managing ordinary palindromes like "did" and "radar" or handwritten text, which change upon mirroring. However, it struggles to achieve disentanglement when dealing with special palindromes like "mom" and "wow". Yet, note that those special palindromes are extremely rare and hence basically have no impact on our hypothesis.
For the case of handwritten text, we have already conducted experiments on 3 real-world typographic datasets where the text is all handwritten and show excellent disentanglement results (Table 4 and Table 5).
For the case of palindromes, we categorized them into two types: ordinary palindromes, where the shape of the words changes before and after flipping ("did" to "bib"), and special palindromes, where the shape of the words remains basically unchanged ("mom" to "mom"). We constructed corresponding datasets: the ordinary palindrome dataset includes 26 words ("dad", "madam", "radar", etc.), while the special palindrome dataset includes 5 words ("wow", "noon", "mom", "nun", "minim"). Both types of palindromes are illustrated in Figure Ⅱ(c) and Figure Ⅱ(d) in the attached PDF. The results are shown in Table Ⅲ in the attached PDF. For ordinary palindromes, MirrorCLIP achieves disentanglement with a 13.85-point improvement over the baseline, comparable to the improvement for other words. However, for special palindromes, MirrorCLIP struggles to achieve disentanglement and only improves the accuracy by 5.29. As special palindromes are quite rare compared to other words, according to the results in Table Ⅲ, their impact is limited.
Thanks for pointing out this. We will include a description of the special palindrome scenario in the limitation section.
**W3-1&Q2:** Why … not produce … semantics … typography?
**A3:** This issue is likely due to the limitations of the Stable UnCLIP model we used for feature visualization. It does not possess the capability to directly generate semantically relevant images when dealing with text-only images; the generated images are often meaningless characters. More examples are shown in Figure Ⅱ(a) in the attached PDF.
As seen in the first row of Figure 7, after disentanglement, images generated with visual features do not carry textual components, and images generated with textual features do not carry visual components. This shows the effective disentanglement of MirrorCLIP.
**W3-2:** Whether … disentangles the semantics … text-form visual features.
**A4:** Our method can disentangle the textual semantics. This is verified through text recognition. According to the results in Tables 5 and 6, with disentangled textual features, the accuracy of text recognition improved significantly. This indicates the excellent disentanglement capability of MirrorCLIP for features with textual semantics, not only text-form visual features.
**Q3:** In Table 6, why … 61.03 when … zeroed out? Does … disentangling … insufficient?
**A5:** There might be some misunderstanding. **Text recognition accuracy when textual features are zeroed out is actually 23.18 (as shown after visual features (zero) in Table 6), not 61.03**, 61.03 is actually the text recognition accuracy when visual features are zeroed out. We would like to clarify that the label (zero) denotes textual features or visual features obtained by performing Hadamard product of textual or visual masks with image features, as defined in L247. We will clarify the meaning of the label (zero) more explicitly in the revision to avoid any confusion.
The disentanglement of textual features is sufficient based on the large decrease (from 72.51 to 5.29) in text recognition accuracy in Table 6. The text recognition accuracy of textual features obtained with the textual filter is 72.51 while the text recognition accuracy of visual features is 5.29. The large decrease in text recognition accuracy is due to the efficient removal of textual information.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer ueRM,
I hope this message finds you well. As the deadline for the discussion phase approaches, we wanted to check in and see if our rebuttal has addressed your concerns. We would greatly appreciate it if you could reconsider your rating based on the responses and updates we've provided.
Best regards from all authors
---
Rebuttal 2:
Title: Thanks for your response
Comment: My primary concern remains whether mirrorCLIP can genuinely achieve semantic-level disentanglement, rather than merely a formal-level disentanglement. The current experimental results do not convincingly address this issue. My specific questions are as follows:
1. Could the authors provide accuracy results using typography semantics as ground truth (similar to Table II but with disentangled textual features)? This would allow for a more objective assessment of mirrorCLIP's performance in semantic disentanglement.
2. Regarding Figure 7, my question pertains to the "textual features results" in the top row, whereas your response focuses on the "image features results" in the bottom row. If the "textual features results" in the bottom row can generate semantically correct images, this would demonstrate the semantic capability of Stable UnCLIP. Consequently, the "textual features results" in the top row should also produce images with specific, corresponding semantics rather than meaningless text.
3. Text recognition appears to address formal-level (texture-like) disentanglement, which does not necessarily demonstrate semantic-level disentanglement. While text recognition can be a preprocessing step for semantic understanding, it does not itself provide semantic-level features.
4. The results in Table II indeed show that mirrorCLIP has some effectiveness against mirrored text attacks, but the reasoning behind this remains unclear. Could the authors provide further analysis and explanation of this experimental result?
5. In Table IV, could you provide the similarity results for the normal typographic attack?
---
Rebuttal Comment 2.1:
Comment: Thank you for your thoughtful response. Before addressing your points individually, we want to clarify a key aspect. Our main disagreement seems to be whether MirrorCLIP extracts true semantic-level features or merely format-level (text-like) features.
We would like to clarify that **all text recognition experiments in our paper are conducted by directly calculating the similarity between the disentangled textual embeddings from MirrorCLIP and text embeddings from CLIP’s text encoder (as shown in the pipeline in Figure 5). No additional network or layers are used (as inferred from your comment about 'text recognition as a preprocessing step').**
**Text recognition experiment setup**: Compared to the experiment for image recognition, the only adjustments were changing the ground truth from visual categories to typographic categories and modifying the prompt from "a photo of [CLS]" to "text of [CLS]", as detailed in Appendix A.
CLIP's text encoder has been shown to learn robust representations that capture the semantics of input text, and it has been widely used in text-conditional image generation [1,2] and phrase understanding [3,4]. Our text recognition experiment directly utilizes the text embeddings of CLIP and naturally validates semantic-level disentanglement: the content of the typography is predicted by selecting the category whose text embedding has the highest similarity to the disentangled textual embeddings from MirrorCLIP. Therefore, we are convinced that our validation is effective. We will describe this setup more clearly in our manuscript.
[1] Clip-forge: Towards zero-shot text-to-shape generation. CVPR. 2022.
[2] Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 2022.
[3] When does CLIP generalize better than unimodal models? When judging human-centric concepts. ACL workshop, 2022.
[4] Clip also understands text: Prompting clip for phrase understanding[J]. arXiv preprint arXiv:2210.05836, 2022.
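As a rough illustration of the similarity-based prediction described above (mock data only; real usage would take embeddings from CLIP's image and text encoders, and all values here are invented for the example):

```python
import math

# Illustrative sketch (not the authors' code): the predicted typography
# category is the class whose text embedding has the highest cosine
# similarity to the disentangled textual embedding.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def predict_category(textual_embedding, class_text_embeddings):
    """Index of the class text embedding most similar to the input embedding."""
    sims = [cosine_similarity(textual_embedding, e) for e in class_text_embeddings]
    return sims.index(max(sims))

# Mock 4-dim embeddings standing in for CLIP text embeddings of the prompts
# "text of dog", "text of cat", "text of car".
class_embeddings = [
    [1.0, 0.0, 0.2, 0.1],  # "text of dog"
    [0.0, 1.0, 0.1, 0.3],  # "text of cat"
    [0.2, 0.1, 1.0, 0.0],  # "text of car"
]
# A disentangled textual embedding that, by construction, is closest to "cat".
query = [0.1, 0.9, 0.15, 0.25]
```

The only difference from standard CLIP zero-shot image recognition is the choice of ground-truth categories (typographic rather than visual) and the prompt template.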
---
Rebuttal Comment 2.2:
Comment: Here are our point-to-point responses to your further questions:
1. We think there might be some misunderstanding here. In our paper, all text recognition experiments use typography semantics as the ground truth, as you suggested. The detailed experimental setup has been elaborated above, and the accuracy results have already been presented in Table 5, where CLIP's typography-semantics recognition accuracy significantly improves after disentanglement.
2. The generation of meaningless text-only images by Stable UnCLIP can be attributed to two main factors:
- **Lack of Optimization on Typographic Images**: As highlighted in [5], Stable UnCLIP was not specifically optimized for typographic images during its training. This limitation makes the model unstable when generating such images. Our validation experiment, shown in Figure II(a), confirms this issue, where generated images often fail to align with the input image’s semantics, sometimes resulting in nonsensical characters.
[5] High-resolution image synthesis with latent diffusion models[C]. CVPR. 2022.
- **Textual-Visual Entanglement**: As discussed in our Limitation Section, while MirrorCLIP is designed for text-visual disentanglement, the separation is not always perfect, which can negatively impact image generation. Our experiments in Figure 7 illustrate that in low-noise scenarios, such as with solid-color backgrounds, the generated images maintain semantic consistency. However, in more complex scenes with multiple elements, the generation process is prone to producing meaningless text.
In summary, the challenges Stable UnCLIP faces with noise sensitivity complicate its performance in generating typographic images, and by extension, the validation of MirrorCLIP. We will elaborate on this issue in our revised manuscript. Nevertheless, we believe that with the ongoing advancements in generative technologies, this challenge will not hinder the validation of MirrorCLIP’s effectiveness. We sincerely appreciate your valuable feedback.
3. We think there might be some misunderstanding here. As introduced in our supplement setup, the text recognition experiment was not a preprocessing step for semantic understanding. Instead, it directly determines the semantic category of typography, much like image recognition, by calculating the similarity between the disentangled textual embedding from MirrorCLIP and the text embeddings. As prior work [1,2,3,4] suggests, these text embeddings contain different category semantics. Thus, the observed performance improvement after disentanglement suggests that MirrorCLIP indeed achieves semantic-level disentanglement, aligning with our claims.
4. We conducted further analysis on how the positioning of original and flipped text within an image, in conjunction with CLIP's position embeddings, contributes to its ability to distinguish content. Specifically, we designed an experiment where the original and flipped text were placed vertically close, separated by only 10 pixels.
The experiment, conducted under the same conditions outlined in Table II (original and mirrored text), showed that MirrorCLIP's performance in image classification dropped by 3.72 points (from 49.49 to 45.77, as shown below) when the text was closely positioned. This outcome shows that positional proximity diminishes the model's classification accuracy due to the reduced impact of positional information. We will discuss and dive deeper into this in the final version.
||imagenet|food|flowers|avg.|
|:--:|:---:|:--:|:--:|:--:|
|random position|45.99|68.21|34.27|49.49|
|vertically close|43.72|62.77|30.82|45.77|
5. The detailed similarity results for normal typographic attacks have already been provided in Table 1, and we reproduce them below.
||imagenet|food|flowers|avg.|
|:--:|:---:|:--:|:--:|:--:|
|normal typographic attack|0.8164|0.8643|0.8074|0.8294|
It can be seen that, compared to the various special scenarios, the similarity between image features before and after flipping is lower under normal typographic attacks.
We hope the above answers address your concerns. Thank you again for your feedback, which has prompted us to think more deeply about and further evaluate MirrorCLIP. We sincerely look forward to your response.
---
Rebuttal 3:
Comment: **We need to clarify that in our original paper, we have never claimed semantic-level disentanglement as our core contribution. In fact, our core contribution is the observation that CLIP's image embeddings do not exhibit horizontal flip invariance for typography, and we proposed the zero-shot framework MirrorCLIP to achieve the disentanglement of visual and textual embeddings based on this observation. The idea of semantic-level disentanglement was introduced during the rebuttal phase to address your question about whether MirrorCLIP truly disentangles the semantics of text or merely separates text-form visual features, and our experiments confirm that MirrorCLIP is indeed capable of disentangling the semantics of text.**
Detailed answers are shown below.
1. There seems to be a clear misunderstanding about the semantics of CLIP's representations. **If CLIP's text encoder merely focused on the shape of text-like images, it would struggle to recognize images that do not visually resemble their textual labels, for instance, an image of a cat, which obviously does not resemble the word "cat".** Given this, our text recognition experiment is far from typical morphological analysis. Apart from the differences in ground-truth categories and input embeddings, the process for our text recognition experiments is identical to that of image recognition: both use CLIP's text encoder to establish the text embeddings of the ground-truth categories. Rather than analyzing shape, the experiment validates how effectively MirrorCLIP preserves and aligns semantic content with text embeddings, demonstrating that our disentanglement process maintains semantic integrity.
Furthermore, to confirm that our text recognition task is based on semantic disentanglement, we conducted an experiment where images containing the typography of "little" were used as input to the image encoder, and the text ["little", "*I*ittle", "litt*I*e", "*I*itt*I*e"] was used as input to the text encoder, where "l" is the lowercase of "L" and "*I*" is the uppercase of "i", and their shapes are almost identical. We then recognize the content of typography by comparing the cosine similarity between the disentangled textual embeddings obtained from MirrorCLIP and the text embeddings from the text encoder. The final prediction probabilities are shown below.
||"little"|"*I*ittle"|"litt*I*e"|"*I*itt*I*e"|
|:--:|:---:|:--:|:--:|:--:|
|predicted probability|**0.9536**|0.0162|0.0298|0.0005|
Results of the same experiment with images containing the typography of "apple" and text ["apple", "app*I*e", "opple", "opp*I*e", "abble"] are shown below.
||"apple"|"app*I*e"|"opple"|"opp*I*e"|"abble"|
|:--:|:---:|:--:|:--:|:--:|:--:|
|predicted probability|**0.9995**|0.0002|0.0001|0.0002|0|
According to the above experimental results, it is evident that our text recognition experiments rely almost entirely on semantics rather than shape.
In summary, we would like to emphasize that our approach differs significantly from standard text recognition methods, where we use CLIP's text encoder to establish the ground truth, an encoder known for capturing the semantics of input text rather than just morphological patterns [1,2,3,4].
[1] Clip-forge: Towards zero-shot text-to-shape generation. CVPR. 2022.
[2] Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 2022.
[3] When does CLIP generalize better than unimodal models? When judging human-centric concepts. ACL workshop, 2022.
[4] Clip also understands text: Prompting clip for phrase understanding[J]. arXiv preprint arXiv:2210.05836, 2022.
---
Rebuttal 4:
Comment: 2. We highlight our conclusion here: due to the limitations of Stable UnCLIP and of our method, we cannot generate the expected results based on textual semantics with Stable UnCLIP. In fact, Stable UnCLIP is barely able to work in such extreme cases. This does not indicate that our disentanglement of textual semantics is insufficient. Instead, we have shown the effectiveness of our disentanglement of textual semantics by contrasting disentangled textual embeddings from MirrorCLIP with text embeddings from CLIP's text encoder. Moreover, below are some key points that we need to clarify in comparison to [1]:
* As described in Section 7.1 of [1], the image generation method used in [1] is entirely different from ours. **The method in [1] is text-to-image generation with text prompt, whereas our method is image-to-image generation with no text prompt. And the generation model used in [1] was optimized with their proposed models, while ours is not.**
* **It is obvious that the image generation experiments in [1] can only demonstrate that they achieved format-level disentanglement, not semantic-level disentanglement.**
* **In the third row of Figure 7 in our paper, the images generated using disentangled textual embeddings contain semantically relevant visual components rather than typography. This clearly demonstrates that our method operates at the semantic level rather than the format level.**
[1] Disentangling visual and written concepts in CLIP. CVPR 2022.
The examples of generated images in [1] demonstrate that they can only control whether they prefer to generate visual or textual components within the same semantic context using a trained projection matrix. This precisely indicates that their work only achieves format-level disentanglement rather than semantic-level disentanglement, as there is no second semantic input provided to the generative model. For example, as shown in Figure 1 of [1], when the input text prompt is "corn", the approach in [1] can only control whether to generate visual components of corn or the typography of the word "corn". This clearly demonstrates that only format-level disentanglement is achieved, not semantic-level disentanglement.
Moreover, the generative model in [1] is optimized with their projection matrix, as described in Section 7.1 of [1], while ours is not. Despite this, the image quality produced by our method using disentangled visual embeddings is noticeably better than that produced by the "forget to spell" model used in [1]. Additionally, the model we used was trained on images with normal visual components and was not specifically optimized for typography. As a result, our generated results are more susceptible to noise interference when dealing with textual embeddings of typography. It is evident that the textual embedding in the first row is much more affected by noise than that in the third row of Figure 7, which explains why the first row struggles to generate semantically relevant images.
Furthermore, in the third row of Figure 7, the images generated using textual embeddings of typography contain semantically relevant visual components rather than typography. This clearly demonstrates that our method operates at the semantic level rather than the format level.
I hope the above answers address your concerns. We sincerely look forward to your response. | Summary: This paper proposes a simple yet effective disentanglement framework for CLIP, leveraging the different characteristics of visual and textual semantics under mirror reflection, and reveals that the CLIP model does not exhibit horizontal flip invariance for text, demonstrating a certain degree of innovation. The framework achieves zero-shot disentanglement of textual and visual features and conducts various experiments, utilizing methods such as CAM and image generation, to validate the effectiveness of the disentanglement framework. Additionally, it enhances the robustness of CLIP against typographic attacks without any additional training, surpassing the defense performance of existing methods.
Strengths: The paper is easy to follow.
The proposed zero-shot, training-agnostic method could have similar performance to non-zero-shot methods.
Weaknesses: 1. In the ablation experiments, detailed experimental results of the disentangling framework when dealing with images containing flipped text were not provided.
2. More description of the potential applications of this disentanglement framework in practical tasks should be provided in conclusions.
3. Incorrect mathematical notation. The cross product symbol ($\times$) is used for multiplication throughout the equations in the manuscript, which may cause misunderstanding.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Detailed experimental results of the disentangling framework when dealing with images containing flipped text were not provided in Ablation Experiment.
**A1:** Thanks for your thorough review of the paper. We show the detailed experimental results with images containing flipped text in Table Ⅰ below. Based on the results of Table 6 and Table Ⅰ, we can see that the visual features obtained through MirrorCLIP achieve high accuracy in image classification tasks when handling both normal and flipped text. The results will be added in the revision.
**Table Ⅰ:** Results of different features on image recognition with flipped text.
| |original|typographic|
| :---: | :---: | :---------------: |
| image features | 61.38 | **55.97**|
| flipped image features | 61.59 | 37.56|
| visual features |**61.84**|50.30|
**W2:** More description of the potential applications of this disentanglement framework in practical tasks should be provided in conclusions.
**A2:** Thanks for your advice. We have initially explored object detection and text segmentation by combining MirrorCLIP with RegionCLIP and SAM. The results show the potential of MirrorCLIP for different downstream tasks and applications. Relevant examples are shown in Figure Ⅰ in the attached PDF. By using MirrorCLIP to obtain the disentangled visual region features of RegionCLIP, we can reduce the influence of textual factors and obtain more accurate detection results. By using the textual features obtained from MirrorCLIP to generate prompts for SAM, we can achieve text localization within images and perform preliminary text segmentation. In our revision, we will include a description of the potential applications of MirrorCLIP.
**W3:** Cross multiplication ($\times$) is used throughout equations in the manuscript, which may cause misunderstanding.
**A3:** Thank you for pointing out the notation issue. We will correct it and thoroughly check all mathematical notations in the revision.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I want to thank the authors for their rebuttal. The concerns are adequately addressed. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer b2Kp,
Thanks for your recognition of our work and constructive suggestions. We appreciate your careful consideration of the suggestions and are glad to hear you view the paper more positively.
Best regards from all authors | Summary: The paper highlights that CLIP may erroneously identify visual objects due to the influence of textual information, thereby reducing the accuracy of visual object recognition. The objective is to extract more precise visual and textual features from the image. The paper proposes that mirroring the image can preserve the consistency of visual semantics while disrupting textual semantics. Based on this insight, a zero-shot framework has been designed. Specifically, a disentangling mask is generated by inputting both the original and flipped images. Additionally, filters are designed to separate textual and visual factors, resulting in disentangled representations. Experiments using stable diffusion models and class activation mapping (CAM) validate the effectiveness of the proposed method.
Strengths: * The proposed methodology is straightforward and easy to implement.
* The results are comprehensive, covering experiments across various settings, including typographic attacks.
* The appendix contains additional results, demonstrating an extensive empirical effort.
Weaknesses: * The proposed method claims to achieve more precise visual semantics by disentangling the semantics of images. I am curious whether this kind of visual semantics can be generalized to other tasks.
* The paper primarily presents experiments for classification. I am interested in whether this approach can be extended to other tasks, such as object detection.
Technical Quality: 3
Clarity: 3
Questions for Authors: The results are comprehensive. But I am still curious if the disentangled representations can be explored for other downstream tasks or applications. It would be better to have a discussion on this.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1&W2&Q1:** It would be better to have a discussion on whether MirrorCLIP can be explored for other downstream tasks or applications.
**A1:** To explore MirrorCLIP's applications in downstream tasks, we combined it with RegionCLIP and SAM for detection and text region segmentation. Specific examples can be found in Figure Ⅰ in the attached PDF.
For detection, RegionCLIP extends CLIP to learn region-level visual representations, allowing for detailed alignment between image regions and textual concepts. This capability supports region-based reasoning tasks, such as zero-shot and open-vocabulary object detection. However, the vanilla RegionCLIP is susceptible to textual components during object detection tasks. By using the MirrorCLIP framework to disentangle the region features of RegionCLIP, we can similarly reduce the influence of textual factors. In Figure Ⅰ(a), vanilla RegionCLIP mistakenly identified a price tag with text "papaya" as papaya. Moreover, after adding the text "television" on the laptop screen, vanilla RegionCLIP was misled and identified the laptop monitor as a television set. These errors were corrected by replacing the region features with the disentangled visual features obtained through MirrorCLIP. This highlights the potential of MirrorCLIP for applications in object detection.
For text region segmentation, by using the disentangled textual features obtained from MirrorCLIP to generate prompts for SAM, we can achieve text localization within images and perform preliminary text segmentation. Specific examples can be seen in Figure Ⅰ(b). This shows that disentangled features through MirrorCLIP can be used for downstream tasks such as image segmentation. Our future work will continue to explore the applications of MirrorCLIP in various tasks.
---
Rebuttal 2:
Comment: Dear Reviewer J5xD,
Thank you for your recognition of our work and constructive suggestions. We hope our additional experiments have addressed your questions. As the deadline for the discussion phase approaches, if you have any other questions or would like to discuss further, please let us know. We sincerely look forward to your response.
Best regards from all authors | Summary: This paper introduces a zero-shot framework, MirrorCLIP, to solve the confusing issues of CLIP facing text-visual images. Unlike existing methods, this method exploits CLIP’s invariance for visual factors and variance for textual factors of images when horizontally flipped. In particular, this paper reveals the difference in mirror effects between visual objects and text on CLIP representation. It first develops a dual-stream disentanglement framework that generates masks by comparing the original text-visual images with the flipped ones in latent space. Additionally, the designed filters generate textual and visual features, respectively, ensuring disentangling quality.
This paper compares the proposed method with various methods across multiple datasets, including clean images, synthetic typographic images, and real-world typographic images. During the experiments, MirrorCLIP showed better disentanglement effectiveness and quality for the textual and visual parts of the images, as well as robustness for typographic-attacked images. The paper also uses CAMs and generative models to further evaluate the disentanglement performance.
Strengths: 1. The finding of the mirror effects of CLIP is novel. This work analyzes the differences between the effects of visual factors and text factors by horizontal flip. Exploiting this, the work proposes an efficient and simple solution to disentangle textual and visual factors in latent space, and address the issue of CLIP networks caused by text-visual images.
2. The experiments are sufficient, and the performance is excellent. Compared to existing baselines and SoTA, MirrorCLIP has better performance and robustness for image classification on both original images and typographic-attacked images. Furthermore, the qualitative results obtained using the CAM method and the generative method demonstrate MirrorCLIP's disentanglement performance.
Weaknesses: 1. This approach can achieve good disentanglement and solve confusion in text-visual image understanding. However, it would be beneficial to delve deeper into the differences by comparing this approach to the existing CLIP-based method for textual and visual disentanglement in related works.
2. The pipeline is based on CLIP. Providing a preliminary introduction to CLIP would be better. Moreover, adding an image that introduces the concept of textual and visual objects of images will improve the clarity of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** It would be beneficial to delve deeper into the differences by comparing this approach to the existing CLIP-based method for textual and visual disentanglement in related works.
**A1:** Thanks for your constructive advice. Compared to other CLIP-based works, our MirrorCLIP is the only training-free method that requires no additional parameters or data, yet it exhibits superior disentanglement performance. Moreover, MirrorCLIP leaves performance on the original datasets unaffected, while other methods may degrade it.
Due to space constraints, we briefly introduced CLIP-based methods in L90. We will highlight the differences between MirrorCLIP and others in the revision. Specifically, Lemesle et al. introduced methodological tools from the cognitive science literature to assess the language biases of CLIP, and found that the textual and visual factors of an image do not share semantic representations in CLIP by presenting words that distort image classification across different category levels. However, they cannot achieve disentangled representations of CLIP. Materzynska et al. disentangled visual and textual features by training different projection matrices and applying them to the CLIP outputs. However, this requires introducing additional model parameters and data for training; it also results in a performance decrease on the original datasets.
**W2:** Providing a preliminary introduction to CLIP would be better. Moreover, adding an image that introduces the concept of textual and visual objects of images will improve the clarity of the paper.
**A2:** Thanks for your advice. In our final version, we will add the preliminary of CLIP, along with a more straightforward presentation of visual and textual components of images to enhance the clarity of our work.
---
Rebuttal Comment 1.1:
Comment: My concerns are well addressed and I have no further questions. Thanks!
---
Rebuttal 2:
Comment: Dear Reviewer xWuF,
Thank you for acknowledging our work and putting forward precious suggestions. As the deadline for the discussion phase approaches, if you have any other questions or would like to discuss further, please let us know. We sincerely look forward to your response.
Best regards from all authors | Rebuttal 1:
Rebuttal: Dear Reviewers,
**Please see the attached one-page PDF with added experimental results.**
We sincerely thank all the reviewers for their positive and constructive comments:
* All reviewers appreciate that our paper introduces a simple yet effective training-free approach to disentangle textual and visual factors of CLIP image embedding in latent space (reviewer 1,2,3,4),
* The observation of difference in mirror effects between visual objects and text on CLIP representation is novel (reviewer 1,3),
* The experiment is sufficient and the results are comprehensive (reviewer 1,2,3).
They also voiced several valid concerns. We have been diligently enhancing the paper on multiple fronts, addressing concerns, and providing a point-to-point response. We summarize the changes updated below.
**1. Exploring the potential applications of MirrorCLIP in various downstream tasks.**
To explore MirrorCLIP's applications in downstream tasks, we combined it with RegionCLIP and SAM for detection and text region segmentation. Specific examples can be found in Figure Ⅰ in the attached PDF. By using MirrorCLIP to obtain the disentangled visual region features of RegionCLIP, we can reduce the influence of textual factors and obtain more accurate detection results. By using the textual features obtained from MirrorCLIP to generate prompts for SAM, we can achieve text localization within images and perform preliminary text segmentation. These examples demonstrate the potential of MirrorCLIP for various downstream tasks.
**2. Further tested MirrorCLIP's disentanglement capability in various extreme scenarios.**
We further tested the disentanglement capability of MirrorCLIP in three special scenarios: typography with both original and mirrored text, ordinary palindromes, and special palindromes. We constructed corresponding datasets and conducted experiments; detailed results are shown in the attached PDF.
According to the results, when handling ordinary palindromes, where the shape of the words changes after flipping ("did" to "bib"), MirrorCLIP can still achieve disentanglement performance comparable to that of handling other normal words. However, when handling special palindromes, where the shape of the words remains basically unchanged after flipping ("mom" to "mom"), MirrorCLIP struggles to achieve disentanglement. Yet, because special palindromes are quite rare compared to other words, their impact is limited.
When handling typography with both original and mirrored text, MirrorCLIP can still achieve disentanglement, but there is a noticeable decline in performance. However, compared to ordinary typography, typography with both original and mirrored text amounts to a targeted strong attack against our method and is not common in the real world. Also, MirrorCLIP is primarily proposed as a disentanglement method rather than a defense method.
We will discuss MirrorCLIP's performance in these scenarios in the Ablation and Limitation sections of the revision.
**3. More revisions that help enhance the clarity of the paper.**
* We will further clarify the differences between MirrorCLIP and other disentanglement methods.
* We will add the preliminary of CLIP, along with a more straightforward presentation of visual and textual components of images.
* We will correct and clarify all symbols and definitions that could lead to misunderstandings.
Please see our reviewer-specific feedback for detailed information.
Pdf: /pdf/0c1a3e1df42908d7f984d590630194723200f7ed.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences | Accept (poster) | Summary: The paper proposes a multi-frame optical flow estimation model with a novel flow decoder that estimates a flow output for all frames simultaneously. The paper describes this process as a Streamlined In-batch Multiframe (SIM) pipeline and argues that this leads to efficiency gains when processing video, as each multi-frame batch needs to contain each video frame only once, i.e., the video processing can advance by multi-frame batch instead of frame-by-frame. The decoder is termed Global Temporal Regressor and uses an iterative design as is common in many recent optical flow methods. In order to estimate multi-flow output, the decoder uses a correlation volume for each pair of input images. The first step of the decoder is the output of a "MotionEncoder" which uses the current flow estimate and the correlation volume. These motion features form the input to a temporal feature encoder which is implemented with the Super Convolution Kernels by Sun et al., 2022. Spatial image features are then matched with a cross attention module and also input to the updater for the flow. The paper explains the stacking of features over time and the use of the Twins transformer for encoding temporal relationships in their Integrative Spatio-temporal Coherence (ISC) modeling. The paper reports experiments with the KITTI and MPI-Sintel datasets and two training schedules. The tables show that the proposed StreamFlow achieves either SOTA or close-to-SOTA performance. The paper reports that an A100 GPU with 40 GB of memory is enough to train the method with either 3 or 4 images as input to the batch. The paper reports a more detailed breakdown of Sintel results, and an ablation study is provided.
Strengths: The stream processing for optical flow is new to the best of my knowledge. In the proposed computationally heavy method, it enables distributing the computational effort over multiple outputs. This idea brings the method to a per-frame computational cost similar to methods such as RAFT. The idea of video processing in batches with multiple outputs has been used in MTTR, a referring video object segmentation method by Botach et al., 2022.
The proposed flow decoder GTR is novel as it works with multiple correlation volumes, uses spatio-temporal features as inputs and combines them to an iterative design.
The paper also proposes to append temporal input in space for spatio-temporal feature encoding in the ISC that are the "appearance features" for the flow decoder.
The results show that the proposed StreamFlow is very capable of estimating flow on the Sintel and KITTI datasets with a training schedule of only Chairs and Things.
The paper contains an ablation study to evaluate the components of the proposed model.
Weaknesses: The multiple-input, multiple-output processing only partially addresses the high computational cost of recent optical flow methods. In particular, the high memory cost limits the use of StreamFlow to non-edge devices. Considering that optical flow is a low-level visual cue, this prevents StreamFlow from being used as an optical flow sub-module in other tasks.
The paper overstates the performance gains of the method. In particular, considering the full training schedule, StreamFlow does not reach top performance on MPI-Sintel or on KITTI. While the table shows top performance on KITTI, it does not include the online VideoFlow-MOF method, which achieves an Fl-all score of 4.08 (Section 3 of the VideoFlow supplementary material).
The paper does not conduct any experiments with data other than MPI-Sintel and KITTI. The recent large-image Spring dataset by Mehl et al., 2023 would be a good choice.
The authors did not include consideration of MemFlow (CVPR 2024, Dong and Fu), which has been available on arXiv since April 2024. While one may argue that this is parallel work, I think their superior results raise questions about the design choices for the proposed method.
Unrolling time into space in the ISC module seems to be a fairly standard trick. It is hard to think of it as a novel contribution, even though this may be the first time that it has been used in optical flow.
Technical Quality: 4
Clarity: 3
Questions for Authors: Is there a relationship between image size and memory requirements and if so can the method be used to process large-scale images, e.g., in Spring?
The MPI-Sintel datasets makes numerous measures for detailed analysis of the flow results available. Can these measures be listed and discussed in the supplemental material?
The efficiency analysis makes claims about the efficiency of StreamFlow, but no such data is provided. The only information included seems to be the number of parameters and the latency of the method. No comparison to other methods is given except with VideoFlow-BOF on latency in Table 4 (Appendix A.1).
The training EPE is very low. Does this indicate overfitting?
The paper needs some careful proof-reading:
l.27 The consecutive flow goes backwards in time. Please check notation.
l.48 "the aim of bidirectional flows" cannot be understood.
l.83 "features from different in multiple scales" missing word.
l.122 "iterative decoder is the paradigm proposed in RAFT" has been proposed earlier by Hur and Roth in their IRR-PWC, 2019 paper.
l.177,184,200 "could" -> can
Eqn. 2 is very confusing by typing Integration and using superscript and subscript for the time range. Needs better notation.
l.173 jth should be j^{th} (superscript).
Table 2. TransFlow has lower EPE on Matched-clean.
l. 244 Abaltions -> Ablations
l. 306 "it significantly increases during training." fragment, please complete the sentence.
[33] should be updated to the peer-reviewed reference.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper contains a limitation section that raises two relevant limitations of the proposed method: the large memory usage for training and the lack of temporal connections between batches.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The relationship between image size and memory requirements**
Thank you for your question. During inference, memory usage is modest, as shown in Line 305. Moreover, it could be further reduced via packages like ``flash-attention``. We have compared StreamFlow with the recent method MemFlow and found that they share similar memory usage (MemFlow : Ours = 1 : 1.03).
**Q2: Detailed results on the Spring benchmark.**
Thank you for your suggestions. We will add the results to the supplements; they can be seen in the attached PDF.
**Q3: More efficiency analysis.**
Thank you for your questions. Since some recent multi-frame methods are not open-sourced, and their papers lack efficiency figures as well as details such as the number of network layers and their widths, to which the results are sensitive, we had not compared against them. We have now added comparisons with recent methods including MemFlow, as shown in the attached PDF. Notably, StreamFlow, MemFlow, and VideoFlow can all adjust their runtime by changing the number of refinement iterations, trading accuracy for speed. Here we set the number of iterations to 15 for all of these methods by default.
**Q4: Concern on overfitting.**
Thank you for your question. In fact, StreamFlow achieves excellent benchmark and zero-shot test results on multiple datasets, which demonstrates its good generalization ability. Its performance on the training set may be related to the size of the dataset, since Sintel and KITTI only contain 1k+ and 200 samples, respectively. StreamFlow shows leading zero-shot performance on Sintel, KITTI, and the larger Spring dataset.
**Q5: Typos.**
We sincerely thank you for pointing out these issues. In the revised version, we will correct each point and carefully proofread the text again.
---
Rebuttal Comment 1.1:
Title: Rebuttal
Comment: I thank the authors for their rebuttal and the additional results on the Spring dataset. I also would like to acknowledge the new graph giving an overview and comparison of the efficiency of various methods. But unfortunately, I find that the rebuttal only answers some of my questions.
My Q1 was about the relationship between image size and memory, while the authors provided helpful information, the question has not been answered.
My Q2 was a request to report all evaluation parameters for MPI-Sintel.
In weaknesses, I had pointed out that VideoFlow-MOF outperforms StreamFlow for accuracy on KITTI, and that MemFlow has reported superior results. While I understand that different experimental settings lead to different results, the strong claims made in the paper about the relative accuracy of StreamFlow seem overstated.
---
Rebuttal 2:
Title: Response from Authors
Comment: Dear Reviewer,
Thank you very much for your valuable and prompt feedback. We sincerely appreciate the time and effort you have invested in reviewing our work and highlighting areas that require further clarification. We are eager to address your concerns as follows:
**Q1: Relationship between image size and memory usage.**
A: With PyTorch 2.2 and flash-attention, using 12 refinements and 4 frames, the GPU memory usage of StreamFlow is as follows. When the image size increases by 4 times, GPU memory usage increases to nearly 4.4 times; when the image size increases by 9 times, it grows to nearly 20 times. For most current scenarios, StreamFlow still maintains relatively moderate GPU memory usage. The variation in memory usage may be influenced by underlying optimizations in the framework, and we believe it could be further optimized in the future.
| Image Size | GPU Memory |
| ------------ | ------------ |
| 360x640 | 1.19 G |
| 720x1280 | 5.20 G |
| 1080x1920 | 24.11 G|
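The scaling factors quoted above follow directly from the table; as a quick sanity check (plain arithmetic on the numbers in the table, not a profiler run):

```python
# GPU memory (GB) from the table above, keyed by pixel count
base_pixels, base_mem = 360 * 640, 1.19
table = {360 * 640: 1.19, 720 * 1280: 5.20, 1080 * 1920: 24.11}

for pixels, mem in table.items():
    size_factor = pixels / base_pixels   # how much larger the image is
    mem_factor = mem / base_mem          # how much more memory it uses
    print(f"{size_factor:.0f}x pixels -> {mem_factor:.1f}x memory")
# 1x pixels -> 1.0x memory
# 4x pixels -> 4.4x memory
# 9x pixels -> 20.3x memory
```

This confirms the near-linear growth up to 720p and the super-linear jump at 1080p noted above.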
**Q2: All evaluation parameters for MPI-Sintel.**
A: We have collected all measures, including ``all/matched/unmatched EPE, d0-10, d10-60, d60-140, s0-10, s10-40, s40+``, in Table 2 of the attached PDF. Compared to its two-frame baseline, StreamFlow performs exceptionally well in unmatched areas, validating its effectiveness in addressing occlusion issues. We will add a more detailed analysis of each metric in the supplements.
**Q3: Relative accuracy of StreamFlow.**
A: Thank you for your insightful comments and for pointing out concerns about the descriptions in this work. Our initial intention is to highlight its notable 0-shot cross-dataset generalization results, and we hope it did not come across as overstating its performance. We will refine the expression in the revised version to minimize potential misunderstandings. For instance, we will revise the content between Lines #223 and #230 to: ``From Table 1, we could learn that StreamFlow achieves excellent 0-shot cross-dataset performance on Sintel and KITTI. Compared to previous methods, StreamFlow reduces the 0-shot end-point error by 0.16 and 0.08 on the challenging Sintel clean and final passes, respectively. On KITTI, StreamFlow surpasses the previous 0-shot results with 0.11 and 17.65% lower EPE and Fl-all metrics. Besides, without self-supervised pre-training or bi-directional flows, StreamFlow attains commendable accuracy and efficiency on the challenging Sintel and KITTI test benchmarks using (C)+T+S+K+H schedule.``
Besides, StreamFlow indeed uses experimental settings different from others. However, this may also highlight StreamFlow’s advantages. For instance: (1) On the Spring dataset, StreamFlow was only trained for 180k iterations while outperforming MemFlow, which was trained for 400k iterations with the same batch size and learning rate. (2) For Sintel testing, StreamFlow was trained for 300k iterations on FlyingThings and 180k on T+S+H+K, while MemFlow was initially trained for 120k (FlyingChairs) and 150k (FlyingThings) in a 2-frame setting, followed by 600k iterations on FlyingThings and 600k on T+S+H+K. Despite this, StreamFlow still delivers good results, especially on Spring. (3) As for VideoFlow, it employs more frames and bidirectional flows for training, achieving excellent results on the KITTI dataset. We will update the result in Table 1 and add the related discussion. However, VideoFlow explores the accuracy and efficiency under bidirectional flow estimation, which differs from the focus of StreamFlow on a non-overlapping, continuous unidirectional flow pipeline. Additionally, its latency is significantly higher than that of StreamFlow. Therefore, A more relevant comparison for highlighting the issues that StreamFlow addresses is with the baseline method, Twins-SKFlow.
In the end, we greatly appreciate your valuable comments and the opportunity to clarify these points. Please kindly let us know if you have any further questions or require additional clarification. We highly value your insights and stand ready to provide any further information that could be helpful. | Summary: This work focuses on the task of multi-frame optical flow estimation. It challenges the traditional pair-wise flow estimation approach in multi-frame scenarios, which involves redundant calculations. To address this issue, a new framework is proposed that takes multiple frames as input and predicts successive unidirectional flows in a single forward pass.
Strengths: - The proposed method achieves state-of-the-art performance.
- The authors have conducted lots of experiments in the ablation study to justify their designs.
Weaknesses: - The paper is challenging to follow due to several presentation issues.
Specifically, the `Integration` operation used in Eq. 2 is not clearly defined. Additionally, from Eq. 5 to Eq. 9, using mathematical notation rather than component names to represent the model components in GTR would improve clarity and conciseness. Furthermore, the main figure requires a more detailed caption to enhance comprehensibility.
- Lack of further evaluations.
To enhance the validation of the proposed method, the author should also test their model’s generalizability on the widely used Spring [1] benchmark, as it is a standard in many recent optical flow estimation studies.
- Unclear Core Motivation.
In my understanding, the core motivation of this paper is to enable the `simultaneous prediction of successive unidirectional flows in a single forward pass`, as opposed to making per-pair estimations in multi-frame flow estimation scenarios. However, I observed that there are numerous pair-wise operations among the inputs, such as cost volume calculations, in the `single forward pass`.
I suggest that the authors allocate less space to describing these pair-wise operations and instead focus more on discussing and analyzing their designs that enhance efficiency in the `single forward pass`. For example, providing a detailed analysis of the time complexity of the proposed method compared to other methods would be beneficial.
Additionally, it would be helpful for the authors to include comparisons and discussions regarding recent multi-frame flow estimation methods, such as MemFlow [2].
[1] Spring: A High-Resolution High-Detail Dataset and Benchmark for Scene Flow, Optical Flow and Stereo.
[2] MemFlow: Optical Flow Estimation and Prediction with Memory.
Technical Quality: 2
Clarity: 1
Questions for Authors: Please see the "Weaknesses" section for my questions and suggestions. If the author can address these concerns, I would be willing to consider raising my rating.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Eq.2, Eq. 5~9 could be presented with more clarity and Fig. 3 could be given more captions.**
Thank you for your suggestions. We will revise the formulas for clarity and provide additional captions for Fig. 3 as recommended. We provided a rather general explanation of Integration at Line 169. We will emphasize this expression and provide a more detailed description. Additionally, we will include pseudocode in the supplementary materials for this operation, and release all code and checkpoints once the paper is accepted.
**Q2: Results on the Spring benchmark.**
Thank you for your advice. In both the zero-shot evaluation on the Spring dataset immediately after training on Sintel and the fine-tuning on Spring, StreamFlow achieved significantly superior overall results and outperformed MemFlow on multiple metrics. It is important to note that StreamFlow was trained for only 180k iterations, considerably fewer than the 400k iterations used by MemFlow. Please refer to the attached PDF for details.
**Q3: The redundancy in the cost volume, and discussions of other recent methods such as MemFlow, could be included.**
Thank you for your question. In StreamFlow, the cost volume computation is limited to adjacent frames, avoiding redundancy. For instance, with input frames [$I_1$, $I_2$, $I_3$, $I_4$], only the correlations [$C_{1,2}, C_{2,3}, C_{3,4}$] are computed, each exactly once, to derive the flows [$f_{1,2}, f_{2,3}, f_{3,4}$]. The cross-frame information is fused prior to the cost volume calculation via the non-overlapping CSC modules. This is a deliberate design choice in StreamFlow, which explores whether good temporal modeling can still be achieved with a non-overlapping approach.
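The adjacent-frame scheme described above can be illustrated with a short sketch (our own toy illustration, not the actual StreamFlow implementation; the feature maps and the dot-product `correlation` are stand-ins):

```python
import numpy as np

def correlation(f_a, f_b):
    # Toy global correlation: dot products between flattened per-pixel features.
    # Shapes: (H, W, C) -> (H*W, H*W) cost volume.
    a = f_a.reshape(-1, f_a.shape[-1])
    b = f_b.reshape(-1, f_b.shape[-1])
    return a @ b.T

def adjacent_cost_volumes(features):
    # Only T-1 correlations C_{i,i+1} for T frames, instead of all O(T^2) pairs.
    return [correlation(features[i], features[i + 1])
            for i in range(len(features) - 1)]

rng = np.random.default_rng(0)
frames = [rng.standard_normal((4, 4, 8)) for _ in range(4)]  # I1..I4
vols = adjacent_cost_volumes(frames)
print(len(vols))  # 3 cost volumes: C_{1,2}, C_{2,3}, C_{3,4}
```

For four frames this yields exactly the three correlations mentioned in the response, each computed once.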
The work MemFlow (CVPR 24) was posted on arXiv in April 2024, so we had not included it in our earlier comparison. It employs the pairwise method, and the issues it explores do not overlap with the focus of our work. We have now included a comparison with MemFlow in the attached PDF. StreamFlow demonstrates comparable accuracy with superior latency.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you once again for taking the time to review our manuscript. We have tried our best to address the questions you raised (please see our responses in the top-level comment and above) and have revised the paper according to the suggestions provided by all reviewers.
Please kindly let us know if there are any additional questions requiring further clarification. Your feedback is highly valued, and we are more than willing to provide any further information that may be helpful. | Summary: The paper presents StreamFlow, a new optical flow estimation method tailored for video inputs. StreamFlow differentiates itself from earlier methods by incorporating a streamlined in-batch multi-frame pipeline that reduces duplicate computations across consecutive frames, thereby enhancing efficiency. Additionally, the introduction of an ISC module and a GTR decoder allows for effective leverage of spatio-temporal information without an increase in parameter count. Extensive experiments demonstrate that StreamFlow surpasses existing methods on the Sintel and KITTI datasets in both accuracy and efficiency.
Strengths: - The method designs are technically sound and well-motivated.
- Quantitative results on the Sintel and KITTI datasets surpass previous works, demonstrating the proposed method's superior generalization capability.
- The proposed methods achieve faster runtime with a parameter-efficient architecture.
- Extensive ablation studies clearly demonstrate the impact of each proposed module and explore multiple variations.
Weaknesses: - The literature review in Section 2 appears insufficient. It should be expanded to include more detailed discussions of related works such as SKFlow and Videoflow, upon which the architectural designs of this paper are based.
- As I understand it, the primary motivation of this research is to design a non-overlapping inference pipeline for multi-frame estimation. This raises a question about how the non-overlapping inference affects accuracy in terms of frame distance, a topic that seems to be omitted from the paper. For instance, when given three input frames [t-1, t, t+1], a pair-wise pipeline might estimate flow f_t,t+1, and then f_t+1,t+2 using [t, t+1, t+2]. In contrast, if the SIM pipeline computes f_t,t+1 using [t-1, t, t+1], it then predicts f_t+1,t+2 using [t+1, t+2, t+3], thereby missing the opportunity to utilize frame t in estimating the flow for f_t+1, t+2. I would expect this loss of information may become more severe with longer frame lengths. Such analysis should be included in the first row of Table 3 (ablation of SIM Pipeline), yet the ablation study was conducted naively, focusing solely on latency without considering factors that affect accuracy.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please check the weaknesses above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors described limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: More discussions of related works such as SKFlow and VideoFlow could be included in Section 2.**
A: Thank you for your advice. We will add a more detailed discussion to the literature review in Sec. 2. For instance: "To address the issue of occlusion, SKFlow begins by expanding the spatial receptive field and designs effective large-convolution-kernel modules in the decoder of the flow network, without adding significant computational cost. VideoFlow, on the other hand, approaches the problem from a temporal perspective, employing TROF and MOP modules to exploit multi-frame temporal cues and bidirectional optical flow to effectively mitigate occlusion. StreamFlow also starts from the temporal dimension.
Differently, it introduces a new non-overlapping pipeline and explores various temporal modeling strategies that remain effective in such settings. It addresses the redundancy problems that previous pairwise methods, including VideoFlow, encountered, and achieves excellent accuracy with latency similar to some two-frame methods."
**Q2: Table 3 should include the analysis of the loss of information.**
A: Thank you for your suggestions. We will include an analysis of this in the revised version. To hierarchically display the results of progressively adding various modules during ablation, we did not include the temporal modeling module in the first part of Table 3 previously, and only changed the pipeline. This allowed a direct comparison with results after adding "Tem. modules" in the second part.
We thank you again for raising this issue, which does indeed exist. As the frame distance increases, the information provided may decrease, as confirmed by the results in the attached PDF. However, this impact does not necessarily grow with more frames, because the affected frames are mainly distributed at the head or tail of a group. Let us define the longest frame distance that provides effective information as $m$. As the length of the group increases, there are more frames in the middle of the group (i.e., frames $I_t$ for which the interval [$I_{t-m}, I_{t+m}$] lies entirely within the group) and proportionally fewer frames at the two ends. As shown in the experiments, the impact is already weakened with 4 frames. We will include the related discussion in the revised paper; a future study on an appropriate choice of $m$ may be helpful for multi-frame optical flow work.
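The counting argument in the response above can be made concrete with a small sketch (an illustrative helper of our own, not from the paper; $m$ denotes the longest frame distance assumed to carry useful information):

```python
def frames_with_full_context(group_len, m):
    # A frame at position t (0-indexed) in a group of length group_len has
    # full temporal context if the whole window [t - m, t + m] lies inside
    # the group; only frames near the two ends of the group miss context.
    return sum(1 for t in range(group_len)
               if t - m >= 0 and t + m <= group_len - 1)

# As the group grows, the number of boundary frames stays fixed (2m),
# so the fraction of affected frames shrinks.
for T in (3, 4, 8):
    full = frames_with_full_context(T, m=1)
    print(T, full, T - full)  # group length, middle frames, boundary frames
```

With $m = 1$, the number of boundary frames is constant at 2 regardless of group length, which matches the intuition that the impact weakens for longer groups.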
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you once again for taking the time to review our manuscript. We have tried our best to address the questions you raised (please see our responses in the top-level comment and above) and have revised the paper according to the suggestions provided by all reviewers.
Please kindly let us know if there are any additional questions requiring further clarification. Your feedback is highly valued, and we are more than willing to provide any further information that may be helpful.
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their rebuttal and the additional results provided. While there is no compelling reason to oppose publication, and I acknowledge the contributions of the paper, I do not find them to be particularly significant. Therefore, I will maintain my current rating. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
Please refer to the attached one-page PDF that summarizes the added experimental results, which include:
**1. Results on the Spring dataset (CVPR' 23), and the comparison with the recent method MemFlow (CVPR' 24, on arxiv 2404)**
StreamFlow has achieved superior performance on the Spring dataset, surpassing MemFlow. It is important to note that, due to limited time during the rebuttal phase, StreamFlow was trained for only 180,000 steps instead of the 400,000 steps used by MemFlow.
**2. More comparison to other methods on latency.**
Please refer to Figure 1 in the PDF.
**3. Influence of the frame distance.**
Please refer to Table 3 in the PDF.
**4. Detailed results on the MPI-Sintel dataset.**
Please refer to Table 2 in the PDF.
We would like to express our gratitude to all reviewers for providing constructive feedback, which has significantly contributed to the improvement of our paper. We have been working diligently on improving the manuscript in response to your critiques. Please see our reviewer-specific feedback for more detailed information.
Pdf: /pdf/df3bbd3c186598c0a4babc5900c6fde4404edbee.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Challenges of the Nonlinear Regime for Physics-Informed Neural Networks | Accept (poster) | Summary: The authors analyze the NTK perspective for PINNs for non-linear PDEs. Previous considerations derived for linear PDEs fall short for non-linear ones, and the authors attribute this difference to the non-vanishing Hessian term. They therefore suggest using second-order methods and show faster convergence in one experiment. Second-order methods are shown to be useful also for linear PDEs, as they alleviate the spectral bias of the NTK.
Strengths: - the paper is well written and flows very nicely
- I found the analysis about the NTK in non-linear regime elegant and the experiments clearly support the findings
Weaknesses: - the different behaviour of the NTK for non-linear PDEs is not very surprising
- it is also not very surprising that second-order methods can work better, but in practice they come with significant shortcomings. Even though the method is shown to be faster in one experiment, no rate is derived, so the speedup might not hold in general.
Moreover, the applicability of Theorem 4.2 is only sketched (lines 218-222).
Technical Quality: 3
Clarity: 4
Questions for Authors: - I would like the authors to elaborate on the applicability of the second order method. In particular, I found the explanation in lines 218-222 a bit hand-wavy. It seems like the hypothesis of Theorem 4.2 (e.g. J(t) being full-rank) are very hard to check in practice
- No guidance or intuition is provided for practitioners on when it might be convenient to use the second order method, given that the method is expensive (the authors only mention general well-known considerations for second order methods in the limitations paragraph)
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are discussed. However, as mentioned in the question section, I think the authors should discuss in more details the scalability of the proposed second order approach and on when it might be convenient to use it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the kind words regarding the structure and analysis in our paper. While we agree that some of the results might be expected, most of the existing literature focuses on the linear regime and only conjectures what might happen in the nonlinear case. To our knowledge, our paper is the first to characterize the NTK dynamics in the nonlinear case. This characterization is crucial for understanding that the poor behavior of the Hessian can lead to slow and unreliable training of PINNs.
- **[Q1]** We understand the reviewer's concerns regarding the applicability of Theorem 4.2. However, proving the full-rankness of $J$ in the nonlinear case is an open problem that has been widely discussed and remains unresolved. Our contribution is to highlight that the NTK is stochastic in the limit, and for this reason, if one wants to check the full-rankness of $J$, probabilistic methods need to be used. We leave this task to future works. More generally, we do not assert that $J$ is necessarily always full-rank. The remark in lines 218-222 is meant to highlight that the well-conditioning of $J$ is not crucial for a better training when using a second-order method. Indeed, one of the advantages of using second-order methods is that the diagonal of the matrix $D$, even when $J$ is ill-conditioned, only contains ones and zeros.
A first-order method, in contrast, would yield, in addition to zeros, values that might be extremely small (i.e., on the order of $10^{-14}$ in Figure 2(b)). Regardless, this assumption is weaker than the one needed for the theoretical convergence guarantees of first-order methods.
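The contrast drawn in this answer can be illustrated with a small numerical sketch (a toy synthetic Jacobian of our own, not taken from the paper): even for a badly ill-conditioned $J$, the Gauss-Newton-style convergence matrix $D = JJ^{+}$ has eigenvalues only in $\{0, 1\}$, while first-order dynamics see the full, extremely imbalanced spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ill-conditioned, full-rank Jacobian: singular values spanning 8 orders
# of magnitude (mimicking the tiny values mentioned above).
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
J = U @ np.diag([1.0, 1e-2, 1e-4, 1e-6, 1e-8]) @ V.T

# First-order dynamics are governed by the (very imbalanced) spectrum of J.
s = np.linalg.svd(J, compute_uv=False)
print(s.max() / s.min() > 1e6)               # True: strong spectral imbalance

# A Gauss-Newton-style convergence matrix D = J J^+ has eigenvalues only in
# {0, 1}; since this J is full-rank, D is numerically the identity.
D = J @ np.linalg.pinv(J)
print(np.allclose(D, np.eye(5), atol=1e-5))  # True
```

The second print confirms that the second-order convergence matrix contains only ones (and zeros, in the rank-deficient case), independent of how small the singular values of $J$ are.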
- **[Q2]** We thank the reviewer for the interesting question, and we believe that adding a paragraph on this would improve the quality of our paper. In an improved version, we plan to add an intuition on when to use second-order methods. In practice, it may be convenient to use second-order methods whenever there are high frequencies in the solution, when the application requires high accuracy, and when the PDE we aim to solve is nonlinear. One of the goals of our paper is to emphasize that, despite their shortcomings, second-order methods are in general a natural choice in the field of PDE solution.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their clarifications | Summary: This paper studies the training dynamics of PINNs, especially for the nonlinear PDEs. The authors find that the previous recognized NTK viewpoint is not applicable to nonlinear PDEs, although it holds for linear PDEs. Therefore, the global convergence of gradient descent on nonlinear PDEs may not be guaranteed. Moreover, the imbalance of singular values of Gram matrices, which also occurs in linear PDEs, results in slow convergence. To address this issue of spectral bias, the paper suggests using second-order methods for parameters update. Experimental results show that LM Newton’s method can achieve lower training loss compared to Adam and L-BFGS.
Strengths: The paper is well-written and provides both theoretical and empirical analyses. The failure of NTK approach on nonlinear PDEs has not been highlighted in previous studies of the global convergence of training PINNs with gradient descent. The second-order method is essential in smooth problems and deep learning training. The authors highlight the effectiveness of Newton’s method in balancing singular values of the training dynamics.
Weaknesses: (1) the failure of NTK is similarly investigated in some previous works (e.g., https://www.sciencedirect.com/science/article/pii/S016727892300341X), where they found that the Gram matrix does not consistently converge in some cases. Therefore, the observation seems to be not novel. They also pointed out that NTK may also work for some nonlinear PDEs.
(2) the second-order methods for modifying the singular values of Gram matrix of training dynamics are also not new.
(3) the Hessian inverse is quite expensive in practice.
(4) regularizing the Hessian may also be intractable in practice, for high dimensional problems, e.g., training PINNs.
Technical Quality: 3
Clarity: 4
Questions for Authors: I have the following questions:
(1) In the previous work (e.g., https://www.sciencedirect.com/science/article/pii/S016727892300341X), it was found that NTK approach fails in solving some PDEs. Besides linear PDEs, it is still possible that NTK works for some nonlinear PDEs. However, in your paper, you exclude the special situation. It seems to be more restrictive than the published work.
(2) Although second-order methods enjoy very good theoretical properties, the deep learning community typically prefers first-order methods due to their computational efficiency. In practice, Newton's method is more computationally expensive even with inexact techniques (e.g., Krylov subspace, conjugate gradient, and LBFGS). However, your theorem of global convergence (Theorem 4.2) does not apply to these inexact methods. Based on your result (which I believe builds upon some previous works), can you extend your findings to more practical inexact Newton's methods (e.g., replacing the Hessian inverse with a Krylov subspace method or a quasi-Newton method such as BFGS or LBFGS)?
(3) In your experimental results, the LM method performs significantly better than LBFGS. This is surprising, as LBFGS or BFGS asymptotically approaches the exact Hessian under certain conditions. Moreover, in cases where the total number of grid points is large and the batch is small in a stochastic setting, inexact and quasi-Newton methods (e.g., Hessian averaging methods including BFGS and LBFGS) should retain asymptotic convergence. Therefore, intuitively, I would expect LBFGS to perform at least comparably with your LM method. Am I correct? Can you explain why LBFGS performs poorly according to your results, although it can approximate the Hessian and extract Hessian information?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to extend our gratitude to the reviewer for their thoughtful feedback and for bringing this reference to our attention. Now, we will proceed to address the specific questions and concerns raised by the reviewer.
- **Q1** In the paper referenced by the reviewer, the convergence of the Gram matrix at initialization (Theorem 2.2) holds in the non-homogeneous case only for $s\geq1$. The authors state that for $s=\frac{1}{2}$, it is impossible to guarantee universal convergence. This is why our results may seem more restrictive at first glance, but this is due to our focus on the typical NTK scaling $N^{-\frac{1}{2}}$, which allows us to obtain meaningful characterization of the training dynamics using kernel analysis. Furthermore, we explicitly characterize the law of the NTK's limit. In addition to being applicable to any nonlinear PDE, this result enables the leveraging of random matrix theory tools to prove its positive definiteness at initialization, which is a longstanding conjecture in the research community. Hence, we would like to highlight that our results do not contradict those in the referenced paper, even if there seem to be inconsistencies that we are now going to address.
- Regarding the constancy of the Gram matrix during training (Theorem 2.3), we have noticed a problem in the proof of Lemma C.4. In equations 63-68, the authors establish a bound for a single parameter as $ O(1/N^{2s-1/2})$. However, there might be an inconsistency in equation 69, where the same bound is applied to the full vector, which has $O(N)$ components. This discrepancy implies that the bound should be $O(\sqrt{N} \cdot 1/N^{2s-1/2}) = O(1/N^{2s-1})$. Consequently, their theorem would hold for $ s > 1/2 $ instead of $s > 1/4$. Hence, this theorem and our Proposition 3.5 do not contradict each other, since we study $s=1/2$.
- Finally, there seem to be differences regarding the numerical experiments in the reference. However, we noticed that the equations studied in Figures 3 and 5 have a consistent linear part, while our PDE is fully nonlinear. The variation of $K(t)$ during training decreases as $N$ grows because the variation of its linear components decreases, according to our theory. However, the nonlinear components still evolve, and for even larger $N$ (we use a number of neurons several orders of magnitude greater), we observe the plateauing of the plots at a value strictly greater than 0 (note that in their Figure 5, the y-axis starts from 10).
- **Q2 & Q3** To extend our findings to a more practical inexact Newton's method, we can indeed utilize the approach you suggested in Q3, specifically the asymptotic convergence properties of LBFGS and other methods. While this approach is theoretically sound, we must note the following. The theoretical guarantees on the speed of convergence of quasi-Newton methods to exact Newton methods, i.e. their matrix approximation ability, depend on the minimum eigenvalue of the Hessian ([1], Theorem 6). However, as discussed in our paper, the Hessian in PINNs is generally very poorly conditioned. Consequently, quasi-Newton methods may require a practically infinite number of training steps to converge to the true Hessian's inverse and, therefore, to start training higher modes. The aforementioned reasons would explain the intermediate performance of LBFGS, which lies between that of first-order and exact second-order methods.
**[1]** Lin, D., Ye, H., and Zhang, Z. (2022). Explicit convergence rates of greedy and random quasi-Newton methods. Journal of Machine Learning Research.
---
Rebuttal Comment 1.1:
Title: Answer to the rebuttal
Comment: Thank you for the clarification. My concerns are well addressed. I would like to raise my score. To further improve the quality of the paper and make clear claim, I hope authors can include the above related discussions (e.g., the convergence of gram matrices holds for nonlinear PDEs when s>1/2, and the extension (although may fail in practice) to other quasi-newton methods).
---
Reply to Comment 1.1.1:
Comment: We are glad that our reply effectively addressed the reviewer's concerns and we deeply appreciate the increase in score. It will be our pleasure to include in our paper the discussion above. | Summary: The paper studies the NTK of NNs trained on non-linear PDEs, showing that they exhibit different behaviours compared to standard analysis of NTKs. The paper then discusses the issue of spectral bias that arises from first-order methods, showing that they can be alleviated by the use of second-order methods.
Strengths: - The paper presents an interesting analysis of a common tool in NNs and PINNs, and presents an explanation why second-order methods (which are already used in PINNs to some extent) works better than first-order methods.
- The paper provides both theoretical and empirical justification for the various claims, and is well-organised in that manner.
Weaknesses: - The sections could be a bit more coherent. For example, the paper brings up the properties of the NTK in the nonlinear PDE case, but then provides less link of these properties of how it affects the convergence in terms of the spectral biases. The LM algorithm is also brought up as a second-order optimisation method, however it may warrant more description as to why it is introduced or how it differs from existing second-order methods such as LBFGS.
- Explicit mention of LM algorithm's runtime could be mentioned for completeness.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Already in PINNs, there are many works that uses NTKs in loss function scaling [1], collocation point selection [2], analysis of PINN architectures [3], and more. How would the insights in the paper be able to address the points raised in these papers, and how would they affect these proposed methods?
- Is Theorem 4.2 general enough to be applied to regular NNs as well? How does the result compare to existing theoretical works on second-order methods in NNs or general optimisation problems?
[1] Wang et al. When and why PINNs fail to train: A neural tangent kernel perspective.
[2] Lau et al. PINNACLE: PINN Adaptive ColLocation and Experimental points selection.
[3] Wang et. al. On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Limitations suggested are of the LM algorithm which is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive feedback and the constructive comments on our paper. In particular, we acknowledge the second weakness highlighted by the reviewer. We plan to include this information in an improved version of our paper. At present, the reviewer can get a qualitative estimate of the runtime of LM with comparison to that of Adam and L-BFGS by referring to Figure 4(a). Now, let us address the reviewer's questions in the order they were asked:
- **Q1** We would like to emphasize that our intention is not to suggest that our method should substitute other existing approaches, such as the ones mentioned by the reviewer. On the contrary, we believe that combining second-order methods with the enhancements to PINNs mentioned above can yield highly competitive results. This potent combination is demonstrated in Figure 3(a), where we used the LM algorithm together with the method proposed in [3] (as per the reviewer's nomenclature) resulting in fast and accurate convergence on a strongly spectrally biased PDE. On the right side of the same figure, we compare and combine our method with another PINN enhancement, namely curriculum training. Thus, we believe that the LM algorithm, in conjunction with the collocation point selection presented in [2], could also result in positive improvements, especially for engineering use cases where traditional solver often relies on adaptive meshes. We appreciate the reviewer for highlighting this work. Regarding reference [1], the issue of unbalanced loss components is addressed through loss scaling. However, in our work, this issue is implicitly tackled with second-order methods according to the result presented in Theorem 4.2. Indeed, the presence of ones in the convergence matrix $D$ implies, in particular, that the various loss components are balanced during training, producing excellent results when compared to loss scaling. We already have a comparison with [1] in the Appendix, Figure 7, where we referred to the method in [1] as "loss balancing" instead of "loss scaling."
- **Q2** We believe that Theorem 4.2 can also be applied to regular NNs, as there are no specific assumptions on the loss function other than the requirement that the Jacobian $J$ has to be full-rank. However, Theorem 4.2 specifically addresses the issue of spectral bias in PINNs. This issue is a well-known problem for PINNs (even for linear PDEs), but we are not aware of any correlation between spectral bias and worse performance in regular NNs' tasks. Nevertheless, we would like to highlight that there are several papers, such as [4, 5, 6], where general second-order methods have been successfully employed to train regular NNs. Hence, it might be beneficial to extend our analysis to these cases.
**[4]** Z. Yao, A. Gholami, K. Keutzer and M. W. Mahoney, "PyHessian: Neural Networks Through the Lens of the Hessian," 2020 IEEE International Conference on Big Data.
**[5]** Liu, G. H., Chen, T., and Theodorou, E. (2021). Second-order neural ode optimizer. Advances in Neural Information Processing Systems.
**[6]** Vinyals, O., and Povey, D. (2012). Krylov subspace descent for deep learning. In Artificial intelligence and statistics, PMLR. | Summary: In this paper, the theory of the Neural Tangent Kernel(NTK) in the case of solving nonlinear partial differential equations using PINNs is investigated in detail. In particular, it is shown that typical results of the NTK framework do not hold when the simple gradient descent method is employed due to the worse behavior of the Hessian matrix compared to the linear cases. In contrast, when second-order methods are employed, it is theoretically proven that the training of the neural networks is efficient.
Strengths: This paper theoretically investigates the behavior of the learning dynamics of PINNs, which are known to be difficult to train. This paper provides a theoretical guarantee of the effectiveness of second-order optimization methods for training PINNs on nonlinear partial differential equations. In my opinion, this is a significant result, which may enable applications of PINNs to practical problems that were previously out of reach due to training difficulties.
Weaknesses: My concern about this paper is in the increase of the computational complexity of second-order methods; however, this concern has already been discussed by the authors in the paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: In the numerical experiments, it seems that not so large neural networks are employed. Is it expected that neural networks of this size behave like the results of the theory?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: There seems to be no problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First and foremost, we would like to express our sincere gratitude to the reviewer for the encouraging feedback and thoughtful comments on our paper.
Regarding the reviewer's question, you are correct in noting that part of our theoretical framework is developed at the NTK level, i.e., for infinitely wide neural networks. However, our result is "negative" in the sense that, even in this idealized scenario, a first-order method might fail to perform effectively when using PINNs to solve nonlinear PDEs. This, as you mentioned, is due to the worse behavior of the Hessian matrix. Consequently, in the more practical scenario of a network with finite width, we cannot guarantee that PINNs trained with first-order methods can accurately solve any nonlinear PDE.
Moreover, we would like to emphasize that Theorem 4.2, which addresses the convergence of second-order methods, is applicable even for networks with finite width. Therefore, while the NTK model serves as motivation to employ second-order methods, the significant findings regarding spectral bias and convergence do not rely on this idealized model.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the detailed reply. Because I have already given a high score, I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our response met the reviewer's expectations and feedback. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach | Accept (poster) | Summary: - The paper tackles the problem of graph class-incremental learning. The proposed TPP consists of two modules, that is, task profiling and graph prompting.
- The task prediction is accomplished by learning task prototypes based on graph Laplacian smoothing. Specifically, task profiling aims to accurately predict the task ID of each task during inference by constructing Laplacian smoothing-based task prototypes.
- Graph prompting captures task-specific knowledge in graph prompts so as to avoid catastrophic forgetting of the knowledge learned in previous graph tasks; it learns a small discriminative graph prompt for each task.
- TPP shows significant performance improvements, as demonstrated by experiments on four graph datasets.
Strengths: - The idea of using Laplacian smoothing to predict the task IDs and graph prompts to capture task-specific knowledge is interesting and new in graph class-incremental learning.
- The proposed method, TPP, is novel, which not only predicts the task IDs of test graph tasks but also distills task-specific information into prompts.
- The proposed method is both replay-free and forget-free and requires the training of a single GNN only once.
- The task IDs of each graph task can be accurately predicted with theoretical support.
- The paper is clearly motivated and well-written.
Weaknesses: - The task prediction relies on the graph transduction setting. The paper focuses on subgraph incremental learning without considering the inter-edges between graph tasks, so the proposed method may not generalize effectively to other settings.
- The definitions of the different categories of methods for graph continual learning are unclear. The authors argue that graph prompting can reduce the heavy burdens on optimization and storage with the increasing number of tasks compared to training a separate GNN for each task. However, there are no empirical comparisons.
- Some ablation studies are missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors should point out the key differences of task prediction between image and graph more clearly.
- The authors should include the comparisons between graph prompting and training separate GNNs for each task. Can the proposed task identification method be used for other domains like continual image classification? Since the authors argue that graph prompting can reduce the heavy burdens on optimization and storage compared to training a separate GNN for each task, the authors should include the comparisons between graph prompting and training separate GNNs for each task.
- In Table 3, why is the performance of TPP without task prediction significantly low as the graph prompts learn task-specific knowledge?
- In Fig. 4(a), the performance of TPP becomes stable when the size of the graph prompts is larger than one. The authors should provide a more detailed analysis on this phenomenon. The task prediction relies on the graph transduction setting and the accuracy of task prediction with training nodes can also achieve very high accuracy as shown in Fig. 4(b). Therefore, the task identification can be simply conducted by averaging all the node features as the task prototypes. The authors should compare the proposed method with this approach.
- The authors are encouraged to include clear definitions of three categories of methods for graph continual learning to make the paper more comprehensive. Besides, the descriptions of the baselines should be also included.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, they addressed them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive comments and questions. We are grateful for the positive comments on our design and empirical justification. Please see our detailed one-by-one responses below. We will include the new results and discussions below into the paper.
>**Questions #1:** Point out the key differences of task prediction between image and graph more clearly.
The major difference between images and graphs is that image instances are i.i.d. while graph nodes are non-i.i.d. due to the complex structure of graphs. The graph structure brings a unique challenge to the task prediction of graph continual learning. Most existing task prediction methods for images rely on an OOD detector by treating data from other tasks as OOD data. In this paper, we utilize the graph structure to perform task prediction, using a Laplacian smoothing-based method to profile each graph task with a prototypical embedding, which achieves accurate task prediction.
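To make the prototype-based task prediction concrete, here is a minimal NumPy sketch of the idea (our own illustration under simplifying assumptions; the symmetric normalization, the cosine matching, and the toy dimensions are assumptions, not taken from the released implementation):

```python
import numpy as np

def smooth(X, A, s):
    """Apply s steps of Laplacian smoothing: X <- D^{-1/2}(A+I)D^{-1/2} X.

    X: (n, F) node attributes; A: (n, n) binary adjacency without self-loops.
    """
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetrically normalized operator
    for _ in range(s):
        X = S @ X
    return X

def task_prototype(X, A, s=3):
    """Profile one graph task as the mean of its smoothed node attributes."""
    return smooth(X, A, s).mean(axis=0)

def predict_task(test_proto, prototypes):
    """Return the ID of the stored prototype most similar (cosine) to test_proto."""
    sims = [test_proto @ p / (np.linalg.norm(test_proto) * np.linalg.norm(p) + 1e-12)
            for p in prototypes]
    return int(np.argmax(sims))
```

Because prototype construction needs no training, task IDs come from a single nearest-prototype lookup at inference time.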
>**Questions #2/Weaknesses #3:** Include the comparisons between graph prompting and training separate GNNs for each task.
In the following table, we report the comparison of TPP and training separate GNNs for each task in terms of additional parameters and the average accuracy, where $F$ is the dimensionality of the node attributes, $d$ is the number of hidden units in an SGC layer and $T$ denotes the number of tasks. From the table, we can see that the proposed TPP method can achieve very close performance to its variant that trains separate GNNs for each task rather than task-specific prompts, while it involves a significantly smaller number of parameters for all tasks in GCIL.
```
Table A1. Additional parameters and the average performance of the proposed graph prompting and task-specific models.
```
|Method | Additional Parameters |CoraFull | Arxiv | Reddit | Products |
|---|---|---|---|---|---|
|Separate Models| $(F+d)dT$|94.3| 86.8| 99.5|96.3|
|TPP|$3FT$ |93.4|85.4|99.5|94.0|
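As a quick sanity check on the parameter counts in Table A1 (with purely illustrative values of $F$, $d$, and $T$, not the actual dataset dimensions):

```python
# Illustrative dimensions only; the actual F, d, T differ per dataset.
F, d, T = 2000, 256, 10              # attribute dim, hidden units, number of tasks

separate_models = (F + d) * d * T    # (F+d)d weights per separate GNN, one per task
tpp_prompts = 3 * F * T              # three F-dimensional prompt tokens per task

ratio = separate_models / tpp_prompts
print(separate_models, tpp_prompts)  # 5775360 60000 -> roughly a 96x reduction
```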
The proposed task prediction method cannot be directly used for other domains if there is an absence of graph structure information between data instances. A similar prototype-based task profiling approach can be explored for data types like images, but designs for generating more discriminative task profiles may be needed, which would be an interesting future research extension to our method TPP.
>**Questions #3:** Why is the performance of TPP without task prediction significantly low as the graph prompts learn task-specific knowledge?
Despite graph prompts capturing the task-specific knowledge, they are learned independently for each task and are not compatible with other tasks. Without the guidance of task identification, node classification is accomplished by concatenating the class probabilities of the test sample for all tasks and choosing the class with the highest probability as the predicted class. Specifically, assume there are $T$ tasks and each task contains $C$ classes, the predicted class of a test sample can be formulated as $c = \arg \max (c_1, c_2, \ldots, c_{T \times C})$. Suppose the test sample comes from task $i$, the incompatibility between the test sample and prompts from different tasks ($j, j\neq i$) results in the class of the test sample being biased to one class of task $j$ with an extremely high probability. As a result, the class of the test sample cannot be predicted correctly.
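The failure mode described above can be illustrated with a tiny numeric example (hypothetical probabilities, $T=3$ tasks, $C=2$ classes):

```python
import numpy as np

# Per-task class probabilities for one test node that truly belongs to task 0.
# The matched prompt (task 0) is moderately confident on the right class, but a
# mismatched prompt (task 2) is spuriously overconfident on one of its classes.
probs_per_task = [
    np.array([0.85, 0.15]),  # task 0: correct prompt, true class 0
    np.array([0.60, 0.40]),  # task 1: mismatched prompt
    np.array([0.99, 0.01]),  # task 2: mismatched prompt, biased to one class
]

# Without task prediction: concatenate all T*C probabilities and take the argmax.
concat = np.concatenate(probs_per_task)
pred_no_tp = int(np.argmax(concat))      # -> 4, a class of task 2: wrong

# With correct task prediction: argmax only within the identified task.
task_id = 0
pred_with_tp = task_id * 2 + int(np.argmax(probs_per_task[task_id]))  # -> 0: right
```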
>**Questions #4/Weaknesses #1 and #3:** Explain the performance of TPP with the size of graph prompts and discuss the task prediction using all node attributes.
In our experiments, each task contains 2 classes (i.e., $C=2$) of nodes and we reported the performance of TPP with different sizes of the graph prompts in Figure 4. The figure shows that the performance of TPP increases quickly from $k = 1$ to $k = 2$ and remains stable for $k > 2$. We attribute this phenomenon to the intuition that a graph prompt with at most $C$ learnable tokens is often sufficient to instruct the backbone to conditionally perform subsequent tasks. To further evaluate this intuition, we set the number of classes in each task to three ($C=3$) and report the performance of TPP with varying sizes of prompts in the following table. We can see that the performance becomes stable at $k=2$ and $k=3$ for CoraFull and Arxiv respectively, which supports our intuition.
```
Table A2. The average results with different sizes of the graph prompts when each task consists of nodes from three classes.
```
|Datasets|1|2|3|4|5|6|
|---|---|---|---|---|---|---|
|CoraFull|48.9|88.0|89.3|90.2|91.0|91.2|
|Arxiv|64.3|72.2|75.9|76.1|77.0|76.7|
For task prediction by averaging all the node attributes as the task prototypes, it can also achieve good task prediction for most of the datasets (please see the results with the smoothing step set to zero in our response to the Weakness \#4 of Reviewer \#3), but it is not as effective as our proposed TP method. This may be attributed to various situations where this simpler alternative approach fails to work. For example, it cannot distinguish different tasks that may have distinct node attributes but have similar averaged attribute-based prototypes. Moreover, neglecting the graph structure would also result in failures in various other cases, e.g., when different tasks share similar node attributes but have different graph structures. In contrast, the proposed method can address all these cases as shown in Eq.(7).
>**Questions #5/Weaknesses #2:** Include clear definitions of three categories of methods for graph continual learning and the descriptions of baselines.
Existing GCL methods can be roughly divided into three categories, i.e., regularization-based, parameter isolation-based, and data replay-based methods. We will include more detailed definitions of these categories and the descriptions of the baselines in the revision. | Summary: This paper addresses the challenge of class-incremental learning (CIL) in graph data (GCIL) by proposing a novel Task Profiling and Prompting (TPP) approach. It leverages Laplacian smoothing-based task profiling to achieve accurate task ID prediction, thereby mitigating inter-task class separation issues, and introduces a graph prompting method to prevent catastrophic forgetting without the need for data replay. Extensive experiments on four benchmarks demonstrate that TPP achieves 100% task ID prediction accuracy and significantly outperforms state-of-the-art methods in average CIL accuracy while being fully forget-free.
Strengths: - This paper highlights the significant challenge in graph class incremental learning (GCIL).
- It introduces a novel approach in GCIL, task ID prediction, by devising Laplacian smoothing-based task profiling for graph task ID prediction, which shows surprisingly perfect performance.
- Furthermore, extensive experiments are conducted to demonstrate the superiority of the proposed method.
Weaknesses: - The proposed graph prompting method appears to merely apply the technique from [1] to the graph domain, lacking technical novelty.
- The proposed TP method raises several significant concerns:
- The proposed TP method consists of Laplacian smoothing and average pooling without any training. Since Laplacian smoothing and GCN are significantly similar, there is no clear reason why the GCN-based OOD detector (in OODCIL) should underperform compared to the proposed TP. I kindly ask the authors to compare the performance of the GCN-based OOD detector and provide an explanation of the results, with exact numerical values rather than charts. For a similar reason, I do not understand why OODCIL significantly underperforms compared to most baselines. Therefore, please provide the following two sets of results: 1) replacing the task ID prediction module in OODCIL with TP, and 2) replacing the task ID prediction module in TPP with the OOD detector from OODCIL.
- The zero AF scores indicate that the proposed TP perfectly classifies the task ID, which raises suspicions about the task formulation. Specifically, I suspect that each task was formulated in a way that gives TPP an advantage, making it easier to predict the task ID. Therefore, I kindly ask the authors to provide experiments under different task formulations, such as: 1) randomly sampling classes for each task, and 2) splitting the classes in numerical order by the given class number (e.g., (0,1), (2,3), (4,5),...,) and so on.
- The absence of available source code to verify the results amplifies my suspicions about the experimental results.
- In Theorems 1 and 2, the authors assume a sufficiently large number of Laplacian smoothing steps (i.e., $s \rightarrow \infty$). However, this assumption seems flawed because, as $s$ increases, the prototype will collapse due to the oversmoothing problem, making each task indiscriminative. Furthermore, the authors set the value of $s$ to 3 in their experiments, which is far from a sufficiently large number of steps. I kindly ask the authors to provide experimental results for the accuracy of the task ID prediction module with the value of $s$ varying from {3, 5, 7, ...}. These results should demonstrate that as $s$ increases, the accuracy should increase as well.
[1] Learning to Prompt for Continual Learning, CVPR 2022
Technical Quality: 1
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 3
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: The authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive comments and questions. We are grateful for the positive comments on our design and empirical justification. Please see our detailed one-by-one responses below.
>**Weaknesses #1:** TPP is similar to L2P [Ref2].
Although our TPP and L2P both adopt prompting for class-incremental learning, several major differences highlight our novelty.
- **Data:** L2P focuses on continual learning in the vision domain where each instance is independent while TPP addresses the graph continual learning where different nodes are connected by edges and not independent. The complex connections make graph continual learning more challenging.
- **Prompt learning:** L2P follows the key-valued query mechanism to select prompts for inputs and the selected prompts are concatenated with input embedding. In contrast, our prompts operate on raw attributes of nodes and modify them with weighted combinations of prompts. Moreover, L2P employs ViT-B/16 as the backbone, which contains excessive parameters and is well-trained on large-scale auxiliary datasets. However, there are no universal backbones for graph learning. To address this, we learn the GNN backbone based on the first task via graph contrastive learning due to its ability to obtain transferable models across graphs.
- **Forgetting:** Most importantly, L2P does not have task prediction (TP), but TP is the critical mechanism that gives TPP superior performances. By performing TP and learning task-specific prompts, TPP is fully rehearsal-free and forgetting-free. However, the forgetting problem is still a major issue in L2P.
>**Weaknesses #2.1** Why the GCN-based OOD detector (OODCIL) underperforms compared to the proposed TP and baselines.
We agree that Laplacian smoothing (LS) is similar to GCN. However, the proposed Laplacian smoothing-based task prediction (TP) is significantly different from the OOD detector-based method (OODCIL). First, we employ LS to construct the task prototypes without any training to profile each task. By contrast, OODCIL needs to fully train an OOD detector to discriminate the current task and OOD data of other tasks. Second, given a test sample, TP explicitly predicts the task ID by finding the most similar prototype while OODCIL relies on the OOD score to predict the task probability. As a result, the task prediction of OODCIL heavily relies on the performance of the detector. In our experiments, the OOD detector is trained with the current task and OOD data from other tasks. As a result, the OOD detector cannot effectively output the correct OOD score for a test sample, leading to poor performance. Designing more advanced OODCIL may improve the performance, but it is out-of-scope for this paper.
As suggested, we conducted the experiments by changing the task prediction method in OODCIL and TPP and reported the results in the following table. When using our TP method for task prediction, the performance of OODCIL is significantly improved, similar to the performance of our full TPP method (Graph Prompting + TP). On the other hand, the performance of TPP drops significantly if the OOD-based method is used in TPP for task prediction, showing that accurate task prediction plays a critical role in GCIL’s impressive performance.
```
Table A1: The average performance of OODCIL and TPP with different task prediction methods.
```
|Method|Task Prediction|CoraFull|Arxiv|Reddit|Products|
|---|---|---|---|---|---|
|OODCIL|OOD|71.3|19.3|79.3|41.6|
|OODCIL|TP|94.6|84.6|99.6|95.1|
|Graph Prompting|OOD|1.5|4.3|6.0|8.0|
|Graph Prompting|TP|93.4|85.4|99.5|94.0|
Note that Graph Prompting+TP performs similarly to OODCIL+TP. This highlights the importance of our graph prompting from another perspective: OODCIL has far more learnable parameters than our Graph Prompting, as OODCIL trains a separate GNN for each task whereas Graph Prompting trains only a small GNN once and then learns small prompts for each task using a frozen GNN. Furthermore, OODCIL requires data replay, while our graph prompting is replay-free.
>**Weaknesses #2.2:** Accuracy of task prediction with different task formulations.
The following table reports the accuracy of the proposed TP with other task formulations, i.e., descending and random orders, demonstrating that the proposed TP can accurately predict the task IDs in terms of all formulations.
```
Table A2: The accuracy of task prediction with other task formulations.
```
|Task Formulation|CoraFull|Arxiv|Reddit|Products|
|---|---|---|---|---|
|Descending|100|100|100|100|
|Random|100|100|100|100|
>**Weaknesses #3:** Absence of source code.
The source code will be released upon acceptance.
>**Weaknesses #4:** Accuracy of TP with varying steps of Laplacian smoothing.
In Theorems 1 and 2, we assume a sufficiently large number of Laplacian smoothing (LS) steps for task prediction. Specifically, for different graph tasks $\mathcal{G}_i$ and $\mathcal{G}_j$ consisting of different graph data, larger steps result in the train and test prototypes of the same task being the same (Theorem 1) and ensure the prototypes of different graphs are distinct (Theorem 2) simultaneously. In other words, the over-smoothing problem of LS is not a problem that we aim to address but a property that we utilize for accurate task prediction. We report the accuracy of task prediction with varying LS steps in the following table. We can see that the task IDs can be perfectly predicted even with one step of LS. This is attributed to the discriminability of the node attributes across different tasks, i.e., the task ID can often be predicted well even without LS.
```
Table A3. The accuracy of task prediction with different Laplacian smoothing steps.
```
|Steps|0|1|3|5|
|---|---|---|---|---|
|CoraFull|97.14|100|100|100|
|Arxiv|100|100|100|100|
|Reddit|100|100|100|100|
|Products|95.65|100|100|100|
**References**
- [Ref2] Learning to Prompt for Continual Learning, CVPR 2022
---
Rebuttal Comment 1.1:
Comment: I appreciate the thorough response of the authors. However, I'm still suspicious about your task prediction performance.
Specifically, the authors claimed that "given a test sample, TP explicitly predicts the task ID by finding the most similar prototype while OODCIL relies on the OOD score to predict the task probability. As a result, the task prediction of OODCIL heavily relies on the performance of the detector." However, this explanation does not fully address my concerns. The proposed TP is essentially a neighborhood normalized aggregation using their raw node features. On the other hand, the GCN-based OOD detector includes a GCN encoder that functions similarly through neighborhood aggregation. Moreover, the GCN encoder is trained specifically to discriminate between the current task and other tasks—a step that the proposed TP lacks. Therefore, it remains unclear to me why TP significantly outperforms the GCN-based OOD detectors by such a wide margin. The results presented in the third row of Table A1 (i.e., GraphPrompt + OOD) seem particularly questionable, as the performance is inexplicably low (nearly equivalent to that of a random classifier). The authors should clarify and convince readers of the specific factors that enable TP to outperform a seemingly similar approach (GCN-based OOD detector) so dramatically.
My skepticism is further heightened by the results in Table A3. For the Arxiv and Reddit datasets, the performance reaches "100%" accuracy without any Laplacian steps, which implies that simply utilizing node features can perfectly discriminate between tasks. In other words, task prediction appears to be an exceptionally easy problem that can be fully resolved by merely calculating the similarity of raw node features. If that is the case, why does the GCN-based OOD detector struggle with such an apparently straightforward problem?
In summary, I still have significant concerns regarding the perfect performance reported for the proposed TP:
What specific factors allow the proposed TP to significantly outperform the GCN-based OOD detector?
How does the method achieve perfect prediction performance by merely using raw node features and their prototypes for similarity search?
Given these concerns, I strongly urge the authors to provide the source code, as it is the most effective way to address these issues.
I am very open to discussing these concerns further with the authors.
Best regards,
Reviewer j8FV
---
Rebuttal 2:
Comment: Thank you so much for the insightful comment.
The source code of TPP is provided at https://anonymous.4open.science/r/TPP-1B07/README.md, where we provide the implementations of TPP and OODCIL in "Baselines/tpp\_model" and "Baselines/ood\_model" respectively.
To explain why the proposed prototype-based methods can achieve higher task prediction accuracy than the OOD-based method, we'd like to clarify the key differences between them despite they share similar neighborhood aggregation strategies.
Given a sequence of connected graphs (tasks) $(\mathcal{G}^1, \ldots, \mathcal{G}^T)$, where each task contains a set of unique $C$ classes of graph data, TPP constructs task prototypes for each task at its training stage, denoted as $\mathcal{P} = (\mathbf{p}^1,\ldots,\mathbf{p}^T)$. During inference, the prototype $\mathbf{p}^{\text{test}}$ for the test task is constructed, and task prediction is performed by identifying the most similar prototype in $\mathcal{P}$. Note that **this process in TPP does NOT involve any training or data replay**. Despite its simplicity, the prototype-based task prediction can achieve surprisingly good performance. This is attributed to the discriminability of the graph structure and node attributes across different tasks, as shown in Figure 3 in the paper.
**Different from our training-free and replay-free method, OODCIL requires training an OOD detector for each task using data from the current task as in-distribution (ID) data and the rehearsal data from the other tasks as OOD data, and it then utilizes the OOD score to perform task prediction**. This means there are $T$ OOD detectors after learning $T$ tasks. During inference, a test graph is fed into all the $T$ detectors to obtain an OOD score for each task, and the test graph is predicted to belong to the task with the lowest OOD score. Specifically, let $f_o^t(\cdot)$ be an OOD detector trained for task $t$, so there will be $T$ detectors $\{ f_o^1(\cdot), \ldots, f_o^T(\cdot)\}$ after sequentially learning $T$ tasks. Then, for a test graph $\mathcal{G}^{\text{test}}$, ideally, the learned OOD detector $f_o^t(\cdot)$ should yield the lowest OOD score if $\mathcal{G}^{\text{test}}$ comes from task $t$ and output a high OOD score otherwise. To endow OOD detector $f_o^t(\cdot)$ with such an ability, the ideal case is that we have ID data from task $t$ and OOD data from all other tasks. However, due to the sequential emergence of graph tasks and the restriction of access to previous tasks, we treat the current graph at task $t$ as ID data and construct the OOD data by a data replay approach (i.e., sampling subgraphs from all previous $t-1$ tasks) in our experiments. The detector $f_o^t(\cdot)$ is then optimized to perform a $(C+1)$-way classification, where the first $C$ entries of the classification probabilities are for ID classes at task $t$ and the $(C+1)$-th probability output is used to define the OOD score.
However, the OOD detector $f_o^t(\cdot)$ can get only limited access to the graph data from all $(t-1)$ previous tasks (i.e., having access to the replay data only). Moreover, when training $f_o^t(\cdot)$, we also do not have any access to graph data of unseen tasks $j\in(t, T]$. Thus, *due to the lack of sufficient training samples for seen tasks and the absence of samples of an unseen task $j$*, the trained $f_o^t(\cdot)$ can yield a lower OOD score for task $j$ than the score yielded by $f_o^j(\cdot)$ itself. That is, although $f_o^j(\cdot)$ often produces a lower OOD score for task $j$ than for the other tasks, this lowest score can still be larger than the OOD score yielded by some earlier detector for the same task $j$, leading to incorrect task prediction. For example, in Table A4 below, the OOD score for task 3 yielded by $f_o^1(\cdot)$ is lower than that yielded by $f_o^3(\cdot)$ (note that task $3$ has the lowest OOD score among the OOD scores yielded by $f_o^3(\cdot)$), and the OOD scores for tasks 4 and 5 yielded by $f_o^1(\cdot)$ are lower than those yielded by $f_o^4(\cdot)$ and $f_o^5(\cdot)$ respectively. As a result, the OOD scores yielded by OOD detectors trained at earlier tasks are generally very low, e.g., the OOD scores in columns "$f_o^1(\cdot)$" and "$f_o^2(\cdot)$" in Table A4, leading to the incorrect prediction of tasks 3, 4 and 5 as task 1.
```
Table A4. OOD scores of each test task which are yielded by all OOD detectors on the Arxiv dataset with 5 tasks. The test graph is predicted to be the task ID whose OOD detector yields the smallest OOD score.
```
|Test Graph|$f_o^1$|$f_o^2$|$f_o^3$|$f_o^4$|$f_o^5$|
|---|---|---|---|---|---|
|Task 1|**0.08**|0.92|0.90|0.89|0.94|
|Task 2|0.16|**0.07**|0.86|0.87|0.89|
|Task 3|**0.13**|0.31|0.17|0.92|0.85|
|Task 4|**0.16**|0.42|0.82|0.34|0.96|
|Task 5|**0.10**|0.37|0.48|0.93|0.29|
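Plugging the scores of Table A4 into the arg-min selection rule makes this failure mode explicit (a small illustrative sketch of our own):

```python
import numpy as np

# OOD scores from Table A4 (rows: test tasks 1..5, cols: detectors f_o^1..f_o^5).
scores = np.array([
    [0.08, 0.92, 0.90, 0.89, 0.94],
    [0.16, 0.07, 0.86, 0.87, 0.89],
    [0.13, 0.31, 0.17, 0.92, 0.85],
    [0.16, 0.42, 0.82, 0.34, 0.96],
    [0.10, 0.37, 0.48, 0.93, 0.29],
])

# Predicted task = detector with the lowest OOD score for each test graph.
pred = scores.argmin(axis=1) + 1   # -> [1, 2, 1, 1, 1]
# Tasks 3-5 are wrongly assigned to task 1: the early detector f_o^1 never saw
# their data and yields spuriously low OOD scores for them.
accuracy = (pred == np.arange(1, 6)).mean()  # -> 0.4
```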
---
Rebuttal Comment 2.1:
Comment: These two adverse effects lead to inaccurate task prediction by the OOD score-based approach, much less accurate than the proposed prototype-based approach (please see Table A5). This is mainly because i) its training strongly relies on having sufficient OOD data from the tasks other than the current task, and ii) this training (i.e., the access to other task data) is largely restricted due to the CIL nature. Designing an effective OOD detector for graph data has not been explored in the literature. It is challenging due to the limited access to OOD data. If we continuously update the OOD detector, we will need to handle the catastrophic forgetting problem of the OOD detector and its interference with the CIL classifiers as well. So, in this work, we implement a simple approach for OOD detectors, but we agree that it is an important problem for GCIL.
```
Table A5. Task prediction accuracy of three different methods.
```
|Method|CoraFull|Arxiv|Reddit|Products|
|---|---|---|---|---|
|Laplacian Smoothing|100|100|100|100|
|Node Attributes|97.14|100|100|95.65|
|OOD|62.86|10.00|80.00|78.26|
Besides, the reason for GraphPrompt+OOD barely working is that GraphPrompt+OOD has significantly fewer learnable parameters than OODCIL, which leads to less accurate node classification per task. Combined with the less accurate task prediction of the OOD detection method, the overall CIL performance is remarkably degraded.
We very much hope the source code and our responses have addressed your concerns. We're more than happy to take any further questions if otherwise. Please kindly advise. Thank you very much!
Best regards,
Authors of Paper 8497
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer j8FV,
We have provided detailed replies and newly added empirical results to address your concerns on why the OOD detection method is not as effective as our proposed prototype-based task prediction method. Per your request, we have also released our code through an anonymous GitHub repository. Could you please kindly check whether they are helpful for answering your questions? We're ready to take any further questions you might have. | Summary: This paper studies the graph class-incremental learning problem with unknown task identities. Specifically, the unknown task identity is the key challenge, and this work proposes a Laplacian smoothing-based graph task profiling approach that is theoretically justified to be capable of predicting the task identities.
Besides, the forgetting problem is alleviated through a graph prompting approach. This approach learns a graph prompt for each task, such that the classification models for different tasks are separated.
The experiments are conducted on four datasets.
Strengths: The proposed method can achieve 100% task identification accuracy.
Theoretical analysis is also provided for the proposed method.
The proposed method is compared against multiple SOTA baselines and obtains consistent performance improvement.
Weaknesses: The experimental setup is not introduced in detail.
The introduction of the background is not clear enough. For example, what is the relationship between identifying the tasks and overcoming the forgetting problem? Is recognizing the tasks correctly enough to avoid the forgetting problem?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How are the datasets split into different tasks? Will different splitting strategy affect the performance?
2. What is the relationship between identifying the tasks and overcoming the forgetting problem. Is recognizing the tasks correctly enough for avoiding the forgetting problem.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: As mentioned by the authors, the main limitation of the work is the limited representative capacity and generalizability of the GNN backbone model in prompting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive comments and questions. We are grateful for the positive comments on our design and empirical justification. Please see our detailed one-by-one responses below.
>**Weaknesses #1/Questions #1:** Not clear experimental setup and the performance of TPP with different task formulations.
Please refer to our reply in **Global Response to Shared Concerns** in the overall Author Rebuttal section above for this concern.
>**Weaknesses #2/Questions #2:** What is the relationship between identifying the tasks and overcoming the forgetting problem?
In class-incremental learning, the absence of task ID or identification information requires the test instance to be classified into one of all learned classes, leading to the challenge of inter-task class separation. This can exacerbate the forgetting problem in class-incremental learning. By accurately identifying the task ID, the classification is constrained within the task and the forgetting problem can be largely alleviated as shown in Table 2 in the paper. However, it cannot fully overcome the forgetting problem due to the knowledge interference between tasks when training a single model. In this paper, we further address this issue by learning and storing task-specific knowledge in graph prompts, resulting in the proposed method being forgetting-free. | Summary: The paper proposes a Replay-and-Forget-Free Graph Class-Incremental Learning (GCIL) approach called Task Profiling and Prompting (TPP). This method addresses the challenges of class-incremental learning in graph tasks without relying on task identifiers during inference. By using Laplacian smoothing-based task profiling for accurate task ID prediction and a novel graph prompting approach, TPP eliminates catastrophic forgetting and improves classification accuracy across multiple tasks.
Strengths: - The problem of obtaining a GCIL model being both replay-free and forget-free is interesting.
- The presented empirical results are good.
- The theoretical analysis is interesting.
Weaknesses: - Although the CIL problem is interesting, the problem definition in Section 3.1 requires some clarification. Is there any real setting in which a sequence of connected graphs (tasks) appears during training, where these graphs may be the same but the target tasks differ?
Besides, during testing, why is a GCIL model required to classify a test instance into one of all the T × C classes? Should it first identify the task (1 of T), then assign the instance to one of the C classes?
- Likewise, the datasets used in the experiments have been applied in routine scenarios. Why should they be considered in the GCIL setting?
Technical Quality: 2
Clarity: 3
Questions for Authors: - Why "Despite being only learned on G1, f(·) can effectively adapt to all subsequent tasks with graph prompts" as written on line 228-229? Will this lead to high fluctuation if we choose different G1? Could the authors provide some empirical results on that?
Please also check my comments above.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive comments and questions. We are grateful for the positive comments on our design and empirical justification. Please see our detailed one-by-one responses below.
>**Weaknesses #1:** The problem definition requires some clarification.
In the **Global Response to Shared Concerns**, we clarify the task formulation in our experiments. In our setting, we follow the widely-used GCIL setting in [Ref1], where a new graph $\mathcal{G}^t$ that connects with previously observed graphs emerges with a new set of target classes at each time step $t$. The setting may be applied to various real-world applications. For example, in a bank transaction network where each account is a node and an edge is built when there are transactions between two accounts, the newly emerging graph can be formed by new accounts/transactions with new types (classes) of transactions.
During inference, GCIL aims to classify a test instance into one of all the learned classes (i.e., $T\times C$ classes). Most existing GCIL works directly perform the classification without identifying the task ID of test instances, so their class set for each test node includes $T\times C$ classes. An alternative approach, as you suggested, is to first identify the task ID and then perform C-way classification; this type of method requires accurate task prediction to guarantee good CIL performance. This is the approach we take in our proposed TPP, whose task profiling module shows remarkable task prediction accuracy.
>**Weaknesses #2:** Why should these datasets be considered in the GCIL setting?
Following [Ref1], we employ CoraFull, Arxiv, Reddit, and Products as the benchmark datasets for GCIL and follow the same task formulation. This ensures fair comparisons to all methods. These datasets are suitable for GCIL because they contain multiple classes, so a diverse set of tasks can be constructed to evaluate the performance of GCIL methods.
>**Question #1:** The performance of TPP with different task formulations.
Our method can achieve stable CIL performance with different starting and subsequent graph compositions. Please refer to our **Global Response to Shared Concerns** in the overall Author Rebuttal section above for newly added empirical results that justify this ability.
**References**
- [Ref1] CGLB: Benchmark Tasks for Continual Graph Learning. NeurIPS 2022 Datasets and Benchmarks Track. | Rebuttal 1:
Rebuttal: Dear all reviewers,
Thank you very much for the time and effort in reviewing our paper, and for the constructive and positive comments. Our rebuttal consists of two parts: **Global Response** where we address the shared concerns from two or more reviewers and **Individual Response** where we provide a detailed one-to-one response to address your questions/concerns individually.
>**Global Response to Shared Concerns**: The performance of TPP with different task formulations.
For the task formulation in our experiments, we set each task to contain two different classes of nodes and follow the commonly used task formulation strategy in [Ref1] to have fair comparisons with the baselines. Specifically, given a graph dataset with many classes, we split these classes into different tasks in numerically ascending order of the original classes, i.e., classes 0 and 1 form the first task, classes 2 and 3 form the second task, and so on. To evaluate the performance of TPP with different task formulations, we further perform the class splitting in two other manners, including numerically descending and random ordering of the two classes per task. In the following table, we report the average performance of the TPP and the Oracle Model with different task formulations.
```
Table A1. Results of average performance of TPP and Oracle Model on datasets with various task formulations.
```
|Task Formulation | Method | CoraFull | Arxiv | Reddit | Products|
|----|----|----|----|----|----|
|Ascending Order (Reported)| TPP| 93.4|85.4|99.5|94.0|
|Ascending Order (Reported)| Oracle Model|95.5|90.3|99.5|95.3|
|Descending Order|TPP|94.5|85.9|99.4|93.9|
|Descending Order|Oracle Model|96.1|91.6|99.5|94.7|
|Random Order|TPP|94.8|86.9|99.5|85.9|
|Random Order|Oracle Model|95.3|91.3|99.7|86.8|
From the table, we observe that the proposed TPP method can still achieve comparable performance to the Oracle Model with different task formulations, highlighting the robustness and effectiveness of TPP w.r.t. the formulation of individual tasks. Note that the performances of TPP and the Oracle Model both drop on Products with the random task formulation. This is attributed to the heavily imbalanced class distribution of Products, combined with the fact that performance is evaluated by balanced classification accuracy. Specifically, for Products, some classes contain hundreds of thousands of nodes while other classes contain fewer than 100 nodes. The ascending and descending task formulations yield a relatively balanced class distribution for each task, whereas the random task formulation results in some tasks with heavily imbalanced class distributions. To address this problem, de-biased learning is required; we leave it for future research.
Please also note that TPP learns the GNN backbone only on the first task and is frozen during the subsequent prompt learning. Different task formulations result in the GNN backbone being learned with different first tasks. The above results reveal that the proposed graph prompting enables the learned GNN backbone to effectively adapt to all subsequent tasks despite the backbone being learned on different initial datasets.
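The class-splitting strategies described above (ascending, descending, and random ordering of two classes per task) can be sketched as follows. This is our own minimal illustration, not the authors' code; the function name and parameters are placeholders:

```python
import random

def split_classes_into_tasks(num_classes, classes_per_task=2, order="ascending", seed=0):
    """Partition class IDs into incremental tasks, as in the task
    formulations compared in Table A1 (ascending, descending, or
    random ordering of the classes)."""
    classes = list(range(num_classes))
    if order == "descending":
        classes.reverse()
    elif order == "random":
        random.Random(seed).shuffle(classes)
    # Consecutive chunks of `classes_per_task` classes form the task sequence.
    return [classes[i:i + classes_per_task]
            for i in range(0, num_classes, classes_per_task)]
```

For example, with 6 classes the ascending order yields tasks [[0, 1], [2, 3], [4, 5]], matching the "classes 0 and 1 form the first task" description above.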
As for **Individual Response**, we have provided a detailed one-by-one response to answer/address your questions/concerns after the post of your review.
We very much hope our responses have cleared the confusion, and addressed your concerns. We're more than happy to take any further questions if otherwise. Please kindly advise!
Best regards,
Authors of Paper 8497
**References**
- [Ref1] CGLB: Benchmark Tasks for Continual Graph Learning. NeurIPS 2022 Datasets and Benchmarks Track. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training | Accept (spotlight) | Summary: The paper tackles the problem of model growth during training. The authors focus on the problem of efficient LLM pretraining and analyze a plethora of proposed growing techniques. The paper focuses on and established from the very beginning three clear and important objectives: 1. Comprehensive evaluation for decoder transformer language model training, 2. Generalization of small-scale results to bigger scales and 3. The establishment of clear guidelines for practitioners.
Strengths: The paper is excellently written, with clear goals and intentions from the very beginning. It successfully conveys the message that stacking layers once with a specific growth factor, is the best possible solution.
The evaluation framework is comprehensive and sufficient. The authors analyze 4 dominant growing techniques (although ignoring many others) and present convincing results for "stacking" as the most promising technique. They then present experiments scaling up pretraining tokens and FLOPs and show that results generalize out of the box.
Finally, the paper discusses the other two important choices when growing a model, namely when to perform these operations and how much to grow the model. Inspired by scaling laws, the paper fits curves on the "optimum" iso-FLOPs points derived by growing at different points and by different amounts.
Overall the paper is very comprehensive in its experiments both in the main text and the appendix.
Weaknesses: The motivation behind the three objectives is clear, yet at times insufficient.
O1. Reported results in the literature are indeed focusing mostly on BERT pretraining. Although the evaluation framework proposed here is complete and comprehensive, it is not clear why decoder-transformer language modeling is fundamentally different and model growing results will in this case be different compared to BERT pretraining (apart from post-LN in the architecture). The authors should compare their findings with established results on BERT pretraining in the literature and discuss how their best configurations compare to them [1]. Note that other papers ([2, 19 from your pdf]) are dealing with growing operations during language model pretraining.
O2. Viability for scaling is important. One of the issues of model growth not discussed sufficiently in this paper is the problem of "diminishing returns", meaning that one can save compute, but the ratio of compute saved becomes smaller as training progresses (this is hinted in Figure 6a), especially since you are performing a single growing step.
Section 4.2 is very interesting and the experiments enlightening, but the analysis is superficial. [2] has a more detailed analysis to determine growing timings. In general, assuming that models are training following a power law loss, growing timings can be determined based on the gradient of the training losses. This was analyzed in detail in [3], where they describe in detail how to determine these timings. Your case is admittedly more complicated since you are growing in a non-functional-preserving manner, but fundamental take-away messages should be the same.
Section 5, is interesting, but both of the results seem to disagree with multiple findings in the literature. Although additional insights are presented in the appendix, these results seem preliminary and should be taken with a grain of salt.
[1] Wang, Peihao, et al. "Learning to grow pretrained models for efficient transformer training." arXiv preprint arXiv:2303.00980 (2023).
[2] Shen, Sheng, et al. "Staged training for transformer language models." International Conference on Machine Learning. PMLR, 2022.
[3] Anagnostidis, Sotiris, et al. "Navigating Scaling Laws: Compute Optimality in Adaptive Model Training." Forty-first International Conference on Machine Learning.
Technical Quality: 3
Clarity: 4
Questions for Authors: Some additional comments:
- Can you comment with more detail on why you think stacking multiple-times does/should not work?
- How specific are the results to Transformers and language models?
- Since your growing operator is not function-preserving, should there be spikes in the loss curves? If yes why are these not visible in the current figures?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable comments! Due to the word limit, we quote only the beginning of each of your questions below. Here are our pointwise responses:
1. _O1. Reported results in ..._
Thank you for your comments regarding the connections to existing work. The aim of this study is to systematically explore scaling model growth for LLM pretraining. We began by reimplementing various methods, such as bert2bert, stackedBert, stageTraining, and MSG, and attempted to scale them for LLM training. However, after several unsuccessful attempts, we recognized that pre-training billion-scale LLMs presents challenges that differ significantly from those settings.
One clear indication of this is that most studies focus on growing neural models in both directions, while our findings in Figure 3 suggest that widthwise growth is much more challenging than depthwise growth, which aligns with our difficulties in replicating their success in LLM settings. Another crucial insight from our work is that growth timing should be relatively early (for example, starting with a base model trained on 10B tokens before training the larger model for another 300B tokens), whereas existing studies typically grow at a relatively later stage (e.g., stackedBert trains a 3-layer model for 50k steps, stacks and trains a 6-layer model for 70k steps, and then trains a 12-layer model for 280k steps). Additionally, the significant differences in scale present challenges when directly applying their configurations to LLM pretraining, as they typically train at a smaller scale using models like GPT-2 and BERT, with only a small fraction of our data.
Hence, given the configuration challenges and the high costs of LLM pretraining, we decided to summarize existing approaches and design four fundamental growth operators to test under our configurations. Please note that our operators are closely related to existing methods, and we provide a detailed discussion in Appendix A.2, addressing stageTraining, LiGO, MSG, and others. Thank you for highlighting this; we will consider including our preliminary results for existing approaches under their own configurations.
2. _O2. Viability for scaling ..._
We agree that diminishing returns are a crucial factor for any efficient pretraining algorithm. Therefore, we conducted a thorough study to examine scaling in O2. In Figure 6a, we observe a 31% speedup for a 410M LLM after processing 700B tokens, which is approximately 90 times the training tokens suggested by Chinchilla for a 410M LLM. This suggests that diminishing returns are not a significant issue in stacking. Additionally, in our pretraining experiment with the 410M LLM, we did not use the optimal growth timing for that model size, because it had not yet been determined at the time. So finding the optimal growth timing for the 410M LLM could lead to an even greater speedup.
3. _Section 4.2 is very ..._
Thank you for highlighting the importance of using gradients as an indicator for finding optimal growth timing! We agree that it’s a promising direction. In fact, we are currently working on follow-up research to understand the reasons behind the success of stacking. The gradient indicator will also be valuable. We appreciate your suggestion and will consider it for determining growth timings!
4. _Section 5, is interesting,..._
For multiple-times stacking, we will elaborate on point 5. Regarding function preserving (FP), as mentioned in L401 of the paper, we believe it is an important factor, but perhaps not the sole key to the success of model growth methods. For example, LiGO is not fully function preserving, yet it has become one of the popular methods in model growth. We recognize the importance of FP; even our stacking method is not entirely function preserving, as noise exceeding 20% results in a notable performance drop.
The goal of function preserving is to maintain the knowledge learned by the base model to accelerate training. However, the parameter space explored by the base model may not be optimal for a larger model. Thus, focusing solely on function preserving might not align with the goal of speeding up training. This could also explain why we need to grow earlier; a well-trained base model might confine the larger model to a suboptimal state. Nonetheless, this is just our intuitive explanation, and we acknowledge that this could also relate to the gradient indicator you mentioned. We are currently working on a follow-up work to give a more formal account of the success of stacking. Thank you for bringing this up and allowing us to discuss it!
5. _Can you comment with ... _
We agree that the ablation study on multi-growth is preliminary. However, we must acknowledge that multiple-time growth suggests a more complex training dynamic overall. Considering the high costs of LLM pretraining and the focus of our work on "atomic" growth operators, we have only reported our preliminary results on multiple stacking.
6. _How specific are the results..._
The primary motivation of this work is to "scale" model growth techniques, which is why we chose Transformer-based LLMs. However, our recent experiments with SSM-based LLMs, as mentioned in the general rebuttal response, suggest that this method may also be effective across different Transformer architectures. We will consider investigating other models, such as ViT, for further research.
7. _Since your growing ..._
Thank you for your thorough review! Yes, we observed a spike in the loss curves for stacking, and we included the figure in the PDF of general response. Specifically, we compared the baseline, stack, and random methods. It’s evident that while the function preserving operator, random, initially achieves a lower loss right after growth, it is soon surpassed by the stack operator.
In the manuscript, we excluded the base model curve for simplicity. We will add the base-curve figure to the Appendix in the revised version. Thanks for your detailed review!
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional details, these are insightful. I would just be careful about over-claiming things regarding diminishing returns, especially since people nowadays train a lot past the Chincilla optimum.
Overall, the paper is a very good reflection and presentation of stacking approaches in the context of language modelling. I have increased my score. | Summary: The paper introduces a novel method for pre-training large language models (LLMs) efficiently using model growth techniques. The authors tackle three main obstacles: lack of comprehensive evaluation, untested scalability, and absence of empirical guidelines. They propose a depth-wise stacking operator, "Gstack," showing its effectiveness in reducing training time and computational resources across various LLM sizes, verified through extensive experiments and evaluations on multiple NLP benchmarks.
Strengths: The concept of model growth isn't new and stacking the weights depthwise is not novel, but the paper innovatively applies it to LLMs through a systematic method, introducing the "Gstack" operator for depth-wise expansion. The paper is grounded in rigorous experimentation, presenting reproducible results with publicly available code and detailed experimental setups, ensuring high quality and reliability.
Weaknesses: Although extensive, the experiments mainly focus on model performance from a computational and speed perspective. The impact on final model accuracy or downstream task performance is less explored, which could be crucial for practical applications.
Technical Quality: 4
Clarity: 3
Questions for Authors: Will this stacking method influence the robustness or generalizability of the LLMs on downstream tasks?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper lacks theoretical insights that could explain why the Gstack operator works well in practice. Including a theoretical analysis or rationale could strengthen the paper and provide a deeper understanding of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work! Here are our pointwise responses:
1. _Although extensive, the experiments mainly focus on model performance from a computational and speed perspective. The impact on final model accuracy or downstream task performance is less explored, which could be crucial for practical applications._
Thank you for your question. We acknowledge that evaluating only the loss is insufficient for fully assessing the work, but due to the page limit, we have moved most of the evaluation details to the appendix, such as Appendix C (O1), D (O2), G (O3), and H (How to stack).
In terms of the final model accuracy and downstream task performance, we have evaluated using both 0-shot and SFT across different benchmarks, as shown in the Appendix D.4.
We will consider adding additional application settings. Thank you for highlighting this!
2. _Will this stacking method influence the robustness or generalizability of the LLMs on downstream tasks?_
Yes, we agree that investigating the generalizability of our LLM stacking approach is also crucial for its wide-scale adoption. Based on our current downstream experiment results, it seems the generalizability may not be significantly different compared to vanilla LLMs. However, we acknowledge that using out-of-domain datasets is necessary to properly benchmark the robustness and generalizability of the LLMs on downstream tasks. We plan to expand our evaluation in this direction in future work.
3. _The paper lacks theoretical insights that could explain why the Gstack operator works well in practice. Including a theoretical analysis or rationale could strengthen the paper and provide a deeper understanding of the method._
Yes! We're also intrigued by this and plan to pursue follow-up work to explore the reasons behind this. Thank you for your suggestions!
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors response, which addressed all my questions. I am keeping my score. | Summary: This paper studies the model growth technique for large language models, which expand a smaller pretrained large language model into a bigger one. The authors consider four natural way of expanding the parameters of the smaller pretrained large language models, and find that duplicating layers is the most effective technique. Therefore, this work delve deep into the dupicating/stacking layers technique, including the following aspects:
- Scaling model sizes
- Scaling training tokens
- estimating laws
- determining the growth timing
- determining the growth factor
and a bunch of other ablation studies.
Strengths: - Except that the “model growth” concept is not introduced early enough (it is not clearly defined until the related work), the paper is in general well-written and easy to follow.
- The conducted experiments are very comprehensive and well-designed.
- The problem is well-motivated and useful.
- The proposed solution is very natural, simple yet effective.
- In general, this paper provides very insightful observations for future usages.
Weaknesses: - [Minor] The experiments are conducted on Transformer architecture. Trying different model architecture such as SSM can be more interesting.
- [Minor] The concept “model growth” is not introduced clearly until the related works.
Technical Quality: 4
Clarity: 4
Questions for Authors: See weakness.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper is mostly about empirical observation. However, the reason behind this phenomenon is still unclear and not much discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation and positive feedback on our work! Here are our point-by-point responses:
1. _[Minor] The experiments are conducted on Transformer architecture. Trying different model architecture such as SSM can be more interesting._
Yes, we incorporated an SSM-based LLM experiment during the rebuttal period. Please see our general response for more details.
2. _[Minor] The concept “model growth” is not introduced clearly until the related works._
Yes, in order to provide a clear introduction to the three key obstacles, we had to significantly reduce the content about model growth in the introduction to stay within the page limits.
Thank you for the suggestion; we will expand the current paragraph from L30 to L39 to give a more detailed introduction to model growth, particularly highlighting the relationship between the existing literature and our design of the four atomic growth operators. | Summary: The presented work systematically investigated the major obstacles of applying model growth methods to large language models and the corresponding solution. The empirical results reveals that depthwise stacking methods works the best to LLMs. The paper then studied how to practically use the depthwise stacking methods in detail.
Strengths: - The experiments are comprehensive and convincing.
- This paper provides detailed and clear empirical guidelines.
- The experiment results shows that the proposed model growth practice does accelerate the training of LLM by a lot.
Weaknesses: - This paper provides practical guidelines on the usage of model growth methods on LLM training, but doesn't provide any theoretical analysis or intuition on why certain methods work or not work.
- The current study focuses on making use of an existing model growth method in the scenario of LLM training, instead of improving or modifying it. It is possible that there exists a better model growth method for LLMs that doesn't fit into the 4 categories summarized by this paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Authors discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your overall positive feedback and recommendation on our work!
Regarding the two weaknesses you mentioned, we acknowledge that the current draft lacks a thorough theoretical analysis of the depthwise stacking and a comparison to more sophisticated growth methods. This is because our primary goal was to highlight three obstacles in model growth methods for LLM pretraining.
We are also very intrigued in conducting a deeper analysis, especially regarding the reasons behind the success of the depthwise stack, and we are actively working on this.
Thank you again for your thorough review! We will continue to build upon this work and address the points you have highlighted.
---
Rebuttal 2:
Comment: I'm satisfied with the authors' response and have decided to keep my rating as accept. | Rebuttal 1:
Rebuttal: We sincerely appreciate all the reviewers for their time and effort in reviewing our paper. We are thrilled to receive positive feedback from all four reviewers and are honored that the reviewers generally acknowledge our strengths, including: 1. the paper is well-motivated and easy to follow [FXhk,ox3X], 2. the experiments are well-designed, systematic, and rigorous [uHvr], 3. the findings are comprehensive [8DCi,FXhk], convincing [8DCi,ox3X], and clear [8DCi].
We thank the useful suggestions from the reviewers, which help a lot in further improvement of this paper. In addition to the pointwise responses below, main revisions are summarized as follows:
1. __Ablation on state space modeling (Mamba)__
In response to the requests from reviewers FXhk and ox3X for additional ablations using other methods rather than transformer architecture, we have carried out the following ablation study, as detailed in Figure 1 and Table 1 of the attached PDF.
We utilize the codebase from [GitHub - microsoft/Samba](https://github.com/microsoft/Samba), which implements a hybrid State Space Model using the Slimpajama dataset for LM. In this experiment, we follow the guidelines outlined in the paper to guide our stacking process. With a parameter size of 410M and training on 100B tokens, we set the growth timing to 8B and the growth factor to 3. We opted for 3 instead of 4 because Samba is an interleaving of Mamba and self-attention layers. Since the target model has 12 layers, we can only stack even layers, leading us to select a 4-layer base model (Mamba-SA-Mamba-SA).
Our experimental results on loss curves (Figure 1) and downstream tasks (Table 1) indicate that stacking also works beyond Transformer-based LLMs. Please note that in Table 1, we select stacking at 47B tokens rather than 50B to account for the additional compute required to train the base model on 8B tokens.
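The depthwise stacking described above can be sketched abstractly as follows. This is our own illustrative code under the assumption that layers can be deep-copied and concatenated; it is not the authors' implementation:

```python
import copy

def g_stack(base_layers, growth_factor):
    """Depthwise stacking: repeat the entire base layer stack
    `growth_factor` times to initialize a deeper target model."""
    return [copy.deepcopy(layer)
            for _ in range(growth_factor)
            for layer in base_layers]

# E.g., a 4-layer Samba-style base (Mamba-SA-Mamba-SA) with growth
# factor 3 yields the 12-layer target described in the rebuttal.
base = ["mamba", "attn", "mamba", "attn"]
target = g_stack(base, growth_factor=3)
```

Note that repeating the whole stack (rather than duplicating each layer in place) preserves the Mamba/self-attention interleaving pattern, which is why the growth factor must divide the target depth evenly.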
2. __Loss curves with base model__ Figure 2 in the PDF addresses reviewer ox3X's request to illustrate the loss spikes that occur right after stacking.
Please contact us if we can do something else to help you better understand and recommend our paper.
Pdf: /pdf/626390b36d330fe2dc653c2a82e38ba199ae6346.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Policy Improvement using Language Feedback Models | Accept (poster) | Summary: The authors introduce Language Feedback Models (LFM), a method filtering out desirable transitions (i.e. transitions considered as helping to solve a task) collected by an agent in an environment to then improve this agent by doing Imitation Learning on these transitions. The LFM method has three phases: 1) First, a fairly large number of transitions are collected by the initial policy, and GPT4 is used to determine which transitions are desirable. Then, 2) given this dataset of desirable transitions, a smaller LLM is trained to reproduce GPT4 in saying these transitions are desirable. Finally, 3) the policy is used to collect new transitions, which are then filtered out by the smaller LLM, taking only the ones considered as desirable, and the policy maximizes the likelihood of the actions chosen in these transitions.
Experiments in two text-based environments and one visual navigation environment are performed, showing that LFM successfully improves the initial policies (learned with Behavioral Cloning from expert demonstrations) and leads to better results than using GPT4 to directly propose the best action to perform and imitate it. Additionally, experiments show that the learned feedback model can be used to improve the policy on unseen test tasks, which further increases the policy's performance even though the feedback model has not been trained on those tasks. The authors also show that one can go further and train the smaller LLM not only to select desirable transitions but also to explain why these transitions are desirable (also by imitating GPT4), leading to better explainability.
Strengths: The method relies on distilling the feedback ability from GPT4 to a much smaller LLM (Flan-T5 770M) to reduce the cost of asking for feedback at every step. The authors show that they successfully transferred this ability and that the obtained feedback model generalizes to unseen tasks. The experiments also show that their method is not only less compute-effective but also much more efficient than directly imitating GPT4 used as the policy.
The authors also show the method can be trained to provide explanations of the feedback given, leading to better explainability of the method.
Weaknesses: One of the main weaknesses I see is the lack of baselines to properly assess the efficiency of the method. The LFM method acts like a rejection sampling algorithm where only transitions considered desirable by the feedback model are kept to fine-tune the policy. Similarly, if the environments provide a reward, it would have been interesting to have a baseline relying on this information for the rejection sampling. If the reward is sparse, one could wait for the end of the episode and only keep desirable trajectories (e.g. whose episodic reward is above some threshold or whose goal has been reached).
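As a rough illustration, the reward-based rejection-sampling baseline suggested here could look like this (a hypothetical sketch of the proposal, not code from the paper; field names and the threshold are placeholders):

```python
def filter_trajectories_by_return(trajectories, threshold):
    """Keep transitions only from trajectories whose episodic return
    reaches a threshold; the kept (obs, action) pairs would then be
    used for imitation learning, analogous to how LFM keeps the
    transitions its feedback model marks as desirable."""
    kept = []
    for traj in trajectories:
        episodic_return = sum(step["reward"] for step in traj)
        if episodic_return >= threshold:
            kept.extend((step["obs"], step["action"]) for step in traj)
    return kept
```

Comparing this simple reward-filtered BC against LFM's feedback-filtered BC would isolate how much of the gain comes from the learned feedback model itself.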
The authors also say "In this work, we consider long-horizon settings with only sparse and delayed task-completion rewards. Consequently, we focus on imitation learning from demonstrations as opposed to reinforcement learning from rewards." It could also be interesting to see how RL performs with the same amount of collected data.
Finally, it seems the authors use a dataset of expert demonstrations to first train the initial policy with BC before applying their method and the compared baselines. However, they do not provide any insights on how important this preliminary phase is. Indeed, one key advantage of LFM is that it relies on BC while not requiring an expert policy. However, results from Table 3 show that LFM mostly seems to substantially improve an initial policy learned with BC from expert demonstrations.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Table 3, BC results are different from "Prev SOTA" on ALFWorld and Touchdown. Why is this the case, and how did the authors collect expert demonstrations that led to a policy better than SOTA?
- When describing the ACTPRED baseline on line 237, it is mentioned "First, we execute k steps of the base policy, then query the LLM for the next action". What are these k first steps and why are they necessary?
- The authors computed the perplexity averaged over tokens of each possible action with the policy (that is an LLM) and selected the action with the minimum perplexity. Why did the authors not use the (log) probability of the sequence of tokens as in SayCan (Ahn et al., 2022), GLAM (Carta et al., 2023) or TWOSOME (Tan et al., 2024)?
- Also, computing the perplexity (or probability) of each action given the prompt can be very expensive when the action space grows, given that the policy is still a fairly large model (770M). Did the authors use anything to make this fast enough, especially in ScienceWorld where the set of possible actions can be very large?
- In LFM, an initial phase uses the policy to collect trajectories that are then given to GPT4 in order to obtain the dataset to finetune the feedback model. In the experiments, 10k trajectories of 20 steps have been collected if I understood well. Were these trajectories also appended to the dataset used to improve the policy later on?
- The authors showed an example of an explanation provided by the feedback model (based on Flan-T5 770M) in Table 5. Given the limited abilities in text generation of Flan-T5 770M, did the authors perform a deeper study of the explanations provided by the model, especially on the unseen test tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss the limitations of the introduced method in the appendices. They identify that, while LFM improves compute efficiency by distilling feedback abilities from GPT4 into a smaller LLM, Flan-T5 770M is still a fairly large model to be called at every step, both for the policy and the feedback.
Also, Appendix G shows that using Llama 2 70B instead of GPT4 can significantly impact the results. This is not a strong limitation, in my opinion, given that GPT4 is only used in the initial phase to train the smaller LLM.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our paper and provide us with valuable insights. We appreciate the acknowledgement of the generalizability and efficiency of our method, as well as improvements to model explainability.
## W1: Rejection sampling
We thank the reviewer for drawing connections between our method and rejection sampling (RS). We discuss 2 key differences between the two techniques.
First, LFM removes 50-80% of non-productive steps whereas RS includes entire trajectories (20-100 steps for our envs). That said, we experimented with training on entire rollouts w/ positive reward. This did not significantly improve over the base policy.
Second, LFM filtering can be done during testing, where no reward is present. In this case, reward-based RS cannot be directly applied to perform adaptation during testing. That said, one could train a model to predict rejection, using data from the training environments, then use said model on test environments for adaptation. This is precisely what our proposed method does.
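To make the contrast concrete, here is a minimal sketch of the two filtering schemes discussed above: trajectory-level reward-based rejection sampling versus step-level feedback filtering. The transition format and the `feedback_fn` callable are illustrative assumptions, not the paper's actual interfaces.

```python
# Hedged sketch: trajectory-level rejection sampling (RS) keeps or drops
# whole trajectories based on episodic reward, while step-level LFM
# filtering keeps individual productive steps regardless of reward.

def rejection_sample(trajectories, reward_threshold=0.0):
    """Keep entire trajectories whose episodic reward clears a threshold."""
    return [t for t in trajectories
            if sum(step["reward"] for step in t) > reward_threshold]

def lfm_filter(trajectories, feedback_fn):
    """Keep individual steps that a feedback model marks as productive."""
    return [step for t in trajectories for step in t if feedback_fn(step)]
```

Note how `lfm_filter` needs no reward signal at all, which is why it can also be applied at test time, unlike reward-based RS.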
## W2: reinforcement learning
We reference prior RL work on ALFWorld and Touchdown (https://arxiv.org/abs/2110.10661), which shows RL underperforms LFM after training for 10 million steps. ALFWorld episodes typically have <30 steps. Training on demos amounts to 30 * 3.5k ~ 100k steps. We train LFM using 1 rollout per training env for another 100k steps. Touchdown episodes are typically <200 steps. Demos and LFM steps are then 1.5 million steps each. For both envs, LFM requires fewer steps than RL and achieves a higher task success rate (e.g. 64 vs 23 ALFWorld, 60 vs 15 Touchdown).
## W3: Importance of initial demonstrations
We experimented w/ a random base policy, but found collecting trajectories that demonstrate productive behaviour difficult, as random policies are mostly not productive. Consequently, most data given to the LLM for feedback annotation do not exhibit productive behaviour, therefore increasing the cost of LLM usage. We can alleviate this using reward-based RS, as the reviewer suggested in W1. That is, we sample many trajectories, then give those w/ non-trivial reward to the LLM for feedback annotation. We will explore this in future work.
## Q1: BC vs Prev SOTA
We use the same demos as prior SOTA, provided by the env designers. However, we use stronger, newer base models. For instance, ALFWorld prior SOTA fine-tuned a GPT2 model. Touchdown prior SOTA trained a CNN/RNN network. In contrast, we fine-tuned a FLAN T5 Large model, which performs well in language grounding tasks (https://arxiv.org/abs/2210.11416). We perform similarly to prior SOTA for ScienceWorld using the same model but without ScienceWorld-specific prompts (e.g. tracking object types, rooms visited).
## Q2: ActPred k steps
The significance of this k-step is due to the token limitation. We want diverse trajectories over many envs. However, each trajectory may consist of >100 steps (ScienceWorld and Touchdown are often 50-200 steps). Predicting actions from the start using LLMs is biased towards early-trajectory steps - we will exceed the token limitation before encountering late-trajectory steps. Completely rolling out one env at a time will exceed token limitation before processing many of the envs. To balance between coverage of envs and trajectory depth, we first roll out k steps, then predict action using LLM. k is sampled from the max trajectory length of demos.
Note that this coverage problem is less significant for LFMs. Because LFM data is queried in windows of 20-steps, we can maintain high coverage of both environments and of trajectory depth. We also experimented with asking GPT4 to retroactively relabel actions in 20-step windows, however the labels were significantly worse than labeling actions one step at a time.
We will add explanations of the k-step significance to the manuscript.
## Q3: average perplexity vs logprob of sequence
We did explore with log probability, which performed slightly worse than averaged perplexity.
## Q4: Scoring large action spaces
We emphasize that training strictly maximizes log probability of tokens of the correct action, so its complexity is independent of the size of the action space. Inference requires scoring large action sets, however is cheap due to lack of gradient-tracking. The only optimization we did was compute scores in chunks, which are concatenated to select the optimal action.
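The inference scheme described above (average per-token perplexity, scored in chunks over a large candidate set) can be sketched as follows. The `token_logprobs` callable standing in for the policy LLM's per-token log-probabilities is an assumption for illustration; the chunking here mirrors the "compute scores in chunks, then concatenate" optimization, not the authors' exact implementation.

```python
import math

# Hedged sketch of minimum-average-perplexity action selection over a
# large action space, scored in fixed-size chunks.

def avg_perplexity(logprobs):
    """Perplexity averaged over tokens: exp(-mean(log p))."""
    return math.exp(-sum(logprobs) / len(logprobs))

def select_action(actions, token_logprobs, chunk_size=64):
    """Score candidate actions chunk by chunk; return the min-perplexity one."""
    scores = []
    for i in range(0, len(actions), chunk_size):
        chunk = actions[i:i + chunk_size]
        scores.extend(avg_perplexity(token_logprobs(a)) for a in chunk)
    return actions[min(range(len(scores)), key=scores.__getitem__)]
```

Because no gradients are tracked at inference, each chunk only costs a forward pass, which is why scoring even large action sets stays tractable.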
## Q5: initial phase trajectories
We clarify that it is 10k windows of 20 steps sampled from trajectories. These windows contain many negative examples; consequently, we do not add them to the dataset for policy improvement. The data used for policy improvement are only the steps from base policy rollouts that are identified by the trained LFM as productive.
## Q6: analysis of explanations
The examples in Table 5 are actually from the held-out test set tasks. Unfortunately, we did not perform quantitative analyses. Our qualitative findings are that a 770M model, trained on LLM feedback for a specific domain, is able to provide accurate feedback and explanations for said domain. One limitation was that LFMs are sometimes not consistent in evaluating productiveness of ambiguous actions such as exploration. For instance, if the instruction is to find cups and wash them, then even a good policy will spend time searching the room for cups. To some observers, this search procedure is productive, to others, it is not productive. In our experience, LLMs and LFMs are sometimes not consistent in assigning productiveness to such actions. In future work, we would like to analyze the limits of what types of feedback LLMs (and VLMs, for that matter) are able to provide in multi-modal grounded environments.
## Summary
We sincerely thank the reviewer for taking their time to help us improve this work. We hope we have addressed the reviewer’s concerns and questions. If so, would the reviewer please consider increasing their score to show support for our work?
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: The authors properly answered all my questions and concerns.
I will keep my score as it is. | Summary: This paper provides a policy improvement method with a Language Feedback Model (LFM) for decision making tasks.
The proposed method mainly consists of two stages: (1) training a Language Feedback Model (LFM) and (2) improving a policy model with the trained LFM. In the first stage, to train a LFM, the initial policy is used to generate rollouts by interacting with the environment. Then, a LLM is used to generate text feedback on each action in the rollouts. Then, a LFM is trained on this feedback dataset. In the second stage, the trained LFM is used to generate feedback on each action generated by the initial policy. Then, actions that are predicted as desirable from the LFM are collected as a dataset. Finally, the policy is trained on the dataset that contains desirable actions.
This paper evaluates the proposed method on three decision making benchmarks: Touchdown, ScienceWorld, and ALFWorld. This paper empirically demonstrates that the proposed method provides better scores than baselines such as BC or ActPred.
Strengths: S1. The idea of learning a language feedback model (LFM) to provide text feedback on each action sampled from the initial policy is interesting.
S2. This paper provides experimental results on three representative benchmarks: Touchdown, ScienceWorld, and ALFWorld.
Weaknesses: W1. In the introduction section, this paper mentions that sample-efficiency and generalizability are important in instruction-following agents. I am not sure that the policy improvement with LFM is sample-efficient and generalizable. The proposed method trains the initial policy on a rollout dataset where desirable actions are collected by the LFM. Training on many rollouts does not seem sample-efficient. Also, the policy trained on a specific environment may lose generalizability.
W2. In Section 3.2 (i.e., Learning a feedback model), it is rather unclear how to collect desirable behavior D_k.
W3. It would be better to provide an ablation study that shows the effectiveness of the proposed method. For example, the authors may compare policy improvement with an expert LLM to policy improvement with the LFM.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1. Regarding the weakness W1, how many rollouts are needed to train a LFM? Also, how many revised rollouts are needed to train an initial policy?
Q2. Regarding the weakness W2, if we only select desirable actions from a rollout, does the revised trajectory correctly reflect or follow the environment dynamics or the transition function?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors provide limitations of their work in Appendix A (i.e., Limitations).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our paper and provide us with valuable insights. We appreciate the acknowledgement of the novelty of our method, as well as clear demonstrations of improvements using our method across three representative benchmarks.
## W1: sample-efficiency and generalizability
We define sample-efficiency to mean that the agent requires few human labeled demonstrations in order to achieve high performance. In this work, we assume access to a small set of human labeled demonstrations to train a base policy. With no further human labeled demonstrations, we can synthesize demonstrations using the LFM automatically, with which we can improve policy performance. Consequently, we describe the proposed method as sample-efficient because it improves policy performance with no additional human labeled demonstrations.
We define generalizability to mean that the agent generalizes to new environments not seen during training. Let’s consider the game NetHack. In NetHack, there is one implicit goal for the agent, which is to obtain the highest score it can, given the rules of NetHack. In other words, every NetHack agent is given the implicit instruction “get the highest score you can in NetHack, subject to the rules of the game”. In contrast, all three environments we investigate have different instructions between training and evaluation. These environments require generalization to new scenes (like NetHack) as well as to new instructions (unlike NetHack). For ALFWorld, the agent may be trained to “find and wash glasses” and “put apples in the fridge”, but during test evaluation it may be asked to “find apples, wash them, then put them on the dining table” in new rooms. Similarly, in ScienceWorld the agent is required to follow new compositional instructions it has not seen before during training, in new spaces (e.g. determine the boiling temperature of a new substance). In Touchdown, the agent is required to navigate between new starting and end points in new neighborhoods. Consequently, we describe the proposed method as one that generalizes because it improves performance on new environments with new scenes and new instructions not seen during training.
We will clarify these two points by precisely defining sample-efficiency and generalization in the manuscript. We thank the reviewer for identifying this point.
## W2: desirable behaviour
We want to emphasize that the proposed framework consists of two components. The first component involves learning a language feedback model (LFM). The second component involves using this LFM to improve policy. The D_k the reviewer refers to has to do with the second (improving policy), not the first (learning a feedback model). We will summarize both components here.
We start from a base policy trained from a small set of human demonstrations. In the first component of learning a LFM, we roll out the base policy to collect trajectories. These trajectories are given to an LLM to create a dataset (this is not D_k) to train the LFM. This dataset consists of tuples of (agent behaviour, LLM feedback). We train a smaller LFM to imitate LLM feedback. That is, given the agent behaviour, the LFM critiques in natural language whether this behaviour is productive. We then freeze this LFM.
In the second component, we use the trained LFM to improve the base policy. First, we roll out the base policy to collect trajectories. We then use the LFM to predict whether each step in the trajectory is productive. We take the subset of productive behaviour (e.g. context and agent action) and add it to our collection of productive demonstrations. Then we update the policy by training on this collection of demonstrations. We do this in multiple rounds; in each round k, we roll out the policy, identify good behaviour, and add it to the collection of demonstrations. We refer to the collection of demonstrations during round k as D_k.
## W3: compare expert LLM to policy improvement with LFM
We do perform two such experiments in our ablation study. Experiment A compares to using GPT4 (0315) on-policy zeroshot. Table 3 shows that our method (and other methods that train a smaller policy model) achieves significant improvements over zero-shot GPT4. Experiment B uses GPT4 to label what the agent should do, as opposed to critique how the agent performed. We then train a policy using demonstrations as well as GPT-4 labeled actions. This method of using LLM as expert policy to label actions (ActPred), underperforms using LLMs to train feedback models (LFM).
## Q1: number of rollouts
Dataset statistics are shown in Appendix C Table 6. To train the initial policy, we use 3.5k demos for ALFWorld, 3.6k for ScienceWorld, and 6.5k for Touchdown. These correspond to 1 demo for each unique environment. When rolling out, we perform one rollout per unique environment. Consequently we have an identical number of rollouts as initial demos. That is, we perform 3.5k rollouts for ALFWorld, 3.6k for ScienceWorld, and 6.5k for Touchdown.
## Q2: desirable actions vs. transition function
The revised trajectories correctly reflect env dynamics because they are steps that took place during the rollout, according to env dynamics. This observation also explains underperformance of ActPred. In ActPred, an LLM is used to label what actions the agent should take. However, the LLM can hallucinate actions, resulting in demonstrations that do not correctly reflect env dynamics. In contrast, our proposed method only identifies real actions that took place as productive behaviour. Consequently all demos collected using LFMs correctly reflect environment dynamics. This also explains why LFM improvement is larger than ActPred.
## Summary
We sincerely thank the reviewer for taking their time to help us improve this work. We hope we have addressed the reviewer’s concerns and questions. If so, would the reviewer please consider increasing their score to show support for our work?
---
Rebuttal 2:
Comment: Dear Reviewer,
We wanted to send a friendly reminder that we are awaiting your response. With the deadline for the reviewer-author discussion approaching on August 13, we would greatly appreciate it if you could provide feedback at your earliest convenience. We hope we have addressed your concerns and questions. If so, would you please consider increasing your score to show support for our work?
Sincerely,
Authors
---
Rebuttal 3:
Title: After the Author Response
Comment: Thank you for providing thoughtful responses to my comments. For now, I maintain my initial rating. However, I am open to AC's decision on this paper.
---
Rebuttal Comment 3.1:
Comment: Thank you for your acknowledgement! Is there anything unsatisfactory about our response, such that you would not consider increasing your score? Specifically, 1) we provided clarifications regarding sample-efficiency and generalizability, 2) we elaborated on how desirable behaviour is collected, and 3) we noted that the existing manuscript does contain comparisons to expert LLMs as the reviewer requested. Is there anything else you would like to discuss in order for you to support our work? | Summary: The paper presents a method to essentially filter which actions should be used to learn a policy via imitation learning. The method follows the online imitation learning setting and replaces the expert policy with a language feedback model (LFM) distilled from an LLM. The LFM evaluates which transitions from the policy's rollouts should be used to train the policy. An LFM is used instead of an LLM to reduce the computational complexity of the task, and it both consumes and produces text. The method is evaluated on several benchmarks and compared against several ablations over the design decisions.
Strengths: - The paper proposes an interesting idea to filter out the data that should be used for imitation learning, and then proposes to do this with a language model.
- The method demonstrates clear improvements to methods evaluated against.
Weaknesses: - The authors claim that their LFM method is better than using an LLM as an expert policy, because it can provide human-interpretable feedback. However, the authors do not provide any results to suggest that the LFM can produce outputs humans can do something with.
- The only comparisons are to ablations of the proposed method. At a minimum, some of the mentioned related work (like MOTIF (Klissarov et al [22])) should probably be a baseline.
- The LMFA 1 rnds and LMFA 2 rnds in Table 3 do not seem to be discussed in the main body.
- There are some gaps in the writing that have left me with a lot of questions/uncertainties (see below).
- Small things:
- you have places in the PDF with weird formatting, e.g. lines 156 - 157.
- The reference to Figure 1(b) in Section 2 line 72 should probably be 1(c)
Technical Quality: 3
Clarity: 2
Questions for Authors: - It is unclear how the way the method marks states as desirable in the section "learning from language feedback" differs from that in "naively learning from LLM feedback". Is there a connection missing that links how the LFM is "efficiently" learned and how the policy learns? Instead of querying the LLM at each step, is the LLM queried over a trajectory of steps and responds by saying which steps were desirable?
- Is there also a reward function?
- What is an iteration/round in this method? i.e. "In round k, we rollout the base policy...." (Section "Learning from language feedback")
- You state, "Unlike these works ... we consider settings where training and evaluation goals are different." It is not clear to me how the training and evaluation goals differ in your set up.
- How do you decide when to summarize what has happened with a "Before" in the LLM prompt? e.g. Table 2 - LFM prompt.
- What does it mean that you "...limit the amount of LLM usage to 100k GPT-2 tokens"? Did you constrain the number of input tokens? The number of output tokens? How did you go about the constraints?
- How did you select 20 as the number of timesteps over which LLM feedback was collected?
- You say you subselect feedback data to have an even split of productive and non-productive actions. How much data do you actually end up with? What were the original ratios? How diverse are the samples?
- For the results section on generalization to new environments, what makes a new environment new? Table 4 holds results for ALFWorld, ScienceWorld, and Touchdown, which are the environments you report training on.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: There are no limitations in the main body.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our paper and provide us with valuable insights. We appreciate the acknowledgement of the novelty of our method, as well as clear demonstrations of improvements using our method.
## W1: interpretable feedback
To clarify, we have 2 claims. First, policy improvement from LLM feedback is better than using the LLM as the policy. There are two experiments that support this. a) we compare to using GPT4 (0315) on-policy zeroshot. Table 3 shows that our method achieves significant improvement. b) we use GPT4 to label what the agent should do, then train a policy using demos as well as GPT-4 labeled actions. This method of using the LLM as an expert policy to label actions (ActPred), underperforms using LLMs to train feedback models (LFM).
The interpretable feedback experiments show that instead of producing LFMs that only identify good behaviour, we can train descriptive LFM-Ds that also state why a behaviour is good. We show that LFM-Ds perform similarly to LFMs, achieving both interpretability and high levels of policy improvement.
## W2: related work
Thank you, we will add this. LFM differs from MOTIF in that the former improves policies via imitation learning while the latter derives a reward model for RL. While MOTIF shows results on a single game NetHack, we show results across three different settings in household (ALFWorld), scientific experiments (ScienceWorld), and real-scene navigation (Touchdown). In the latter two settings, RL is substantially worse than LFMs (https://arxiv.org/abs/2110.10661). In the future, we will investigate RL using language feedback for multimodal grounded settings.
## W3: adaptation rounds
Thank you for the comment. We describe one-round adaptation in section 5.2 on line 249. We will describe two-round adaptation in the manuscript. Round-wise adaptation is described on line 148: we use the policy from the previous round to perform rollouts, then filter for good behaviour using the trained LFM, then imitate them to improve the policy.
## W4: small things
Thank you, we will make the corresponding corrections to the manuscript.
## Limitations
We discuss limitations and broader impacts of this method in Appendix Section A and B, we will move them before references.
## Q1: efficient vs. naive feedback learning
Let’s suppose we have 20 steps. The "naive" method queries the LLM each step, resulting in 20 queries. The kth query has steps 1…k-1 in the context and asks the LLM whether the kth action is productive. The "efficient" method batches feedback requests into 1 query, which asks the LLM to list which steps were productive (Figure 2). The efficient method is much cheaper, requiring 20x fewer API calls to the LLM (line 130). Once feedback is collected, the subsequent training would be identical.
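A minimal sketch of the batched ("efficient") query can make this concrete. The prompt wording and the expected reply format here are illustrative assumptions, not the paper's exact templates (those are in Figure 2 and Table 2).

```python
# Hedged sketch: one batched feedback query covers a whole window of
# steps, replacing 20 per-step LLM calls with a single call whose reply
# lists the productive step indices.

def build_window_prompt(steps):
    """Render a window of observations/actions as one numbered prompt."""
    lines = [f"Step {i}: {s}" for i, s in enumerate(steps)]
    return "\n".join(lines) + "\nWhich steps are productive? List the step numbers."

def parse_productive_steps(reply):
    """Parse a reply like 'Steps 0, 3, 7' into a set of step indices."""
    return {int(tok) for tok in reply.replace(",", " ").split() if tok.isdigit()}
```

Once the productive indices are parsed, the resulting (behaviour, feedback) pairs feed the LFM training set exactly as the per-step variant would, so only the number of API calls changes.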
## Q2: reward function
We do not train a reward function - we only use imitation learning for policy improvement, not reinforcement learning.
## Q3: iteration/rounds
After training a LFM, we can improve the policy in rounds. In round 1, we start with the base policy P1 trained on initial demos. We roll out P1, and then identify its good behaviour using our LFM. We then add the good behaviours into the demo set and train the policy P2. We then repeat this in round 2, where we identify good behaviours using P2, then use those to train P3, and so on.
## Q4: training vs evaluation goals
Let’s consider NetHack. NetHack has 1 implicit goal for the agent, which is to obtain the highest score it can. In contrast, we consider settings w/ different instructions between train and test, which require generalization to new scenes (like NetHack) and new instructions (unlike NetHack). For ALFWorld, the agent may be trained to “find and wash glasses” and “put apples in the fridge”, but during test it may be asked to “find apples, wash them, then put them on the dining table” in new rooms. Similarly, in ScienceWorld the agent is required to follow new instructions in new spaces (e.g. determine the boiling temperature of a new substance). In Touchdown, the agent is required to navigate between new starting and end points in new neighborhoods.
## Q5: summarize with “Before”
In Table 2, the “Before” is not a summary, it is the observation of the step right before the window starts. In this case step 20. We will clarify this in the manuscript.
## Q6: Token limitation for LLM usage
As described in Table 3, we limit LLM interactions to 100k output tokens. We do not limit input tokens as they are much cheaper than output tokens. We collect feedback for as many windows as possible until we run out of 100k output tokens, then we use this feedback to train the LFM. For ActPred, we label actions for as many steps as possible, until we run out of 100k output tokens, then we use this annotated set along with demos to train the policy. We will clarify this in the manuscript. Thank you!
## Q7: 20 steps
Empirically, 20 steps fits over 90% of observation windows into the model’s context length (8k for GPT4 0315). We will investigate using new LLMs w/ longer context (128k) to train LFMs.
## Q8: Data ratios and sample diversity
This ratio differs between settings (60% not productive for ALFWorld, 70% ScienceWorld, 90% Touchdown). All settings use 10k 20-step windows as feedback data.
We have diversity across tasks, instructions, and envs. LFM data collection is biased by how good the base policy is. If the base policy is bad at task A and good at B, then we tend to identify more “good behaviour” from trajectories from B. In this work, we do not do sophisticated filtering to rebalance on a task level, but we are interested in exploring this in the future.
## Q9: new environments
Please see Q4.
## Summary
We sincerely thank the reviewer for taking their time to help us improve this work. We hope we have addressed the reviewer’s concerns and questions. If so, would the reviewer please consider increasing their score to show support for our work?
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. You have addressed my questions, and I will raise my score. | Summary: The paper proposes to train a language feedback model (LFM) and leverage it to conduct policy improvement in language-based tasks. The authors also propose a pipeline that applies CLIP to convert images into textual descriptions. Experiments on ALFWorld, ScienceWorld, and Touchdown validate the effectiveness of the proposed algorithm.
Strengths: 1. The paper is overall clear and easy to follow.
2. The idea of training a separate language model to serve as language critic function for policy learning is overall novel in decision-making task.
3. The experiments cover a large range of tasks, including one visual task -- which is great to demonstrate the generality of the proposed algorithms.
Weaknesses: 1. The improvement is mainly from distilling language feedback from stronger models (such as GPT-4), which somewhat limits the technical contribution of the proposed algorithm. Is it possible to derive language feedback from exactly the same model? (FLAN-770M might be impossible, but what about a larger one like Llama-3-8B?)
2. The experiments are not comprehensive enough. For instance, Table 3 presents ALFWorld's SOTA from results in a paper from 2021, which I believe is quite outdated; there is plenty of work that has improved ALFWorld's performance a lot. Also, the authors miss several important baselines: 1. RL, since policy evaluation + policy improvement is the basic foundation of RL, so it is essential to compare the proposed algorithm with RL; 2. a line of work starting after Reflexion that also uses verbal feedback to improve performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. There are a few works that might be highly relevant to the paper, including:
[1] Shinn, Noah, et al. "Reflexion: Language agents with verbal reinforcement learning." Advances in Neural Information Processing Systems 36 (2024), which also studies how to use verbal descriptions as feedback to guide the LLM's policy.
[2] Feng, Xidong, et al. "Natural Language Reinforcement Learning." arXiv preprint arXiv:2402.07157 (2024), which is also motivated by verbalizing policy learning process.
[3] A bunch of work covering llm-as-judge, like: Wang, Yidong, et al. "Pandalm: An automatic evaluation benchmark for llm instruction tuning optimization." arXiv preprint arXiv:2306.05087 (2023).
2. Why choose GPT-4-0315 to conduct experiments? This is a relatively old GPT-4 model considering it's mid-2024 now. And are there any other ablation studies covering different types of LLMs?
3. By checking the prompt template shown in Table 2, I have a question: why is there no CoT process before the model judges yes or no? Is this done on purpose, or is there an explanation? In most settings CoT can enhance GPT-4's performance, and I believe this yes/no judgment is very important since it directly influences the performance of policy improvement.
4. The comparison between LFMD and LFD or other baselines seems a bit unfair, since you still need to run rollouts on the test set while the other results are zero-shot.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Weakness and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our paper and provide us with valuable insights. We appreciate the acknowledgement of the strengths of our paper, including its clarity, novelty, and demonstration of generality across a range of tasks.
## W1: improvement from distilling language feedback from stronger models
Our primary contribution is a novel framework that combines language feedback with policy improvement. While it is true that using a single model for both policy learning and language criticism is possible, our results show that a weaker feedback model results in insignificant policy improvement (we show results using Llama2 70B in the Appendix, which is better than Llama3 8b on most benchmarks: https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md#benchmark).
What our findings suggest is that
1. a very large model (let’s say an LLM) is difficult to use as a policy because it is expensive to train and slow at inference;
2. a small model can be trained to provide a reasonable, tractable policy; however, it is not capable of providing high-quality feedback (without training on such feedback).
Our framework uses existing large LLMs to provide high quality feedback, without further training, to improve small tractable policies for specific environments we care about.
## W2: ALFWorld baselines
We also appreciate the feedback on the comprehensiveness of our experiments. Although Table 3 presents a paper from 2021, it is the best-performing imitation learning method without external knowledge on ALFWorld according to the ALFWorld authors. As mentioned in W1, we consider the setting where the LLM is not available during test time. Consequently we did not use reflective techniques nor CoT techniques that tend to show strong benefits with very large models (50B+, Li et al https://arxiv.org/abs/2306.14050). Are there specific works the reviewer would like comparisons to?
## W2: reinforcement learning
We reference prior results that use RL on ALFWorld and Touchdown (https://arxiv.org/abs/2110.10661). Crucially, we show that in these challenging environments, RL underperforms LFM (and the baseline model) after training for 10 million steps. In contrast, ALFWorld episodes typically have <30 steps. Training on the demonstration dataset amounts to 30 * 3.5k ~ 100k steps. LFM improvement using one rollout per training environment, as is the case in our experiments, results in another 100k steps. For Touchdown, episodes are typically <200 steps. Demonstration steps and LFM improvement steps are consequently 1.5 million steps each. For both these cases, imitation learning and LFM improvement require substantially fewer steps than RL and achieve substantially higher task success rate (e.g. 64 vs 23 ALFWorld, 60 vs 15 Touchdown).
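The step-budget comparison above can be sanity-checked with quick arithmetic. The sketch below uses the approximate figures quoted in this rebuttal (episodes of <30 steps, ~3.5k ALFWorld demonstrations, one LFM rollout per training environment, a 10-million-step RL budget); these are rounded values, not exact experiment counts.

```python
# Back-of-envelope step budgets, using the approximate figures quoted above.
def imitation_steps(steps_per_episode: int, num_episodes: int) -> int:
    """Environment steps consumed by one pass over a demonstration set."""
    return steps_per_episode * num_episodes

alfworld_demo = imitation_steps(30, 3_500)   # ~105k demonstration steps
alfworld_lfm = imitation_steps(30, 3_500)    # one rollout per training env
rl_steps = 10_000_000                        # RL baseline budget quoted above

print(alfworld_demo + alfworld_lfm, rl_steps)
```

Even with both phases combined, imitation plus LFM improvement uses on the order of 50x fewer environment steps than the quoted RL budget.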
## W2: reflection
We are very interested in adapting LFM to RL and with reflection, as the reviewer suggested. In this preliminary work, we wanted to scope our problem to investigate the most fundamental setting where the large model (the LLM LFM) is not used during policy improvement. Improving policies with LLMs in the loop (e.g. to provide rewards, to provide reflection) results in hundreds of steps per rollout, and is very expensive for environments we consider. However, we would like to investigate this direction in future work, for instance by learning a small language reflection model in parallel with the policy.
## Q1: references
We appreciate the reviewer’s suggestions of several relevant works, including [1], [2], and [3]. We agree that these papers share similar motivations and approaches with ours, and we will include them in our related work section. In the current manuscript, we specifically discuss the ReAct, Reflexion, and InnerMonologue line of work in the last paragraph of our related works section.
## Q2: GPT-4-0315
Unfortunately due to internal infrastructure policy it was the only GPT-4 model widely available to us at the time of our experimentation. We also experiment with Llama2 70B, the results for which are in our appendix. In ongoing and future work, we are investigating more recent LLMs, including VLMs, as policy critics.
## Q3: Chain of Thought
In this work, we make the assumption that the LLM is not available during test time, only training time. Current evidence suggests that CoT is only significantly helpful when the base model is sufficiently large (50B+, Li et al https://arxiv.org/abs/2306.14050). As mentioned in response to W1, a very large model (let’s say an LLM) is difficult to use as a policy because it is expensive to train and slow at inference. Consequently, in this preliminary work, we do not investigate CoT. In future work, we would like to study how to distill large base models into small models such that they are capable of providing high-quality CoT. Our result on LFM-D is a first step towards this direction, where we ask the model to provide evidence for its critique, akin to asking a question answering model to provide reasoning steps for its answer.
## Q4: LFMD and LFD and baseline comparison
We want to make an important clarification that LFM and LFM-D do not perform rollouts on the test set - they perform rollouts only on the training set for policy improvement. The only method that performs improvement via test-set adaptation is LFM-A, which achieves significant improvement via adaptation compared to LFM. The comparison to LFM-D (D for descriptive) serves only to show that we can increase the interpretability of the feedback model without suffering performance degradation.
## Summary
We sincerely thank the reviewer for taking their time to help us improve this work. We hope we have addressed the reviewer’s concerns and questions. If so, would the reviewer please consider increasing their score to show support for our work?
---
Rebuttal 2:
Comment: Dear Reviewer,
We wanted to send a friendly reminder that we are awaiting your response. With the deadline for the reviewer-author discussion approaching on August 13, we would greatly appreciate it if you could provide feedback at your earliest convenience. We hope we have addressed your concerns and questions. If so, would you please consider increasing your score to show support for our work?
Sincerely,
Authors
---
Rebuttal Comment 2.1:
Title: Last day of discussion
Comment: Dear Reviewer 8DkG,
Today is the last day for discussion. Would you please take a look at the author response before the discussion ends? We hope we have addressed your concerns and questions. If so, would you please consider increasing your score to show support for our work?
Thank you!
Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Conditioning non-linear and infinite-dimensional diffusion processes | Accept (spotlight) | Summary: The paper attempts to derive a means of conditioning a nonlinear diffusion process upon function-valued observations, via the $h$-transforms, adapting the method of _Jeremy Heng, Valentin De Bortoli, Arnaud Doucet, and James Thornton. Simulating diffusion bridges with score matching._ to a function-valued setting.
Strengths: Discrete approximations of notionally continuous objects is a ubiquitous problem in machine learning. By representing conditional nonlinear SDE solutions themselves in function space, this expands the range and type of discretization that can be employed to solve problems which are naturally regarded as functions; in this paper, it enables the use of reasonably general (separable) Hilbert-space basis functions as the means of discretization, rather than, e.g. a raster grid.
This problem is interesting and well-posed.
Weaknesses: There are many small oddities in the style which make this paper a difficult read.
See below for those.
The paper presents essentially one result, which is the up-lifting of learned bridge diffusion on a finite dimensional vector space, to ones on a function space with a finitely-truncated basis. This result seems somewhat, if not massively, important.
The first two pages, before the problem statement, are confusing. If we read the paper in linear order we cannot understand many of the assertions made there without reference to equations which have not been introduced yet, and are not even cross referenced. e.g. l46/sect 2.1
>Given an SDE, the conditioned SDE contains an intractable score function. This is similar, but slightly different, to the score function that arises in generative diffusion models Vincent [2011], Song and Ermon [2019], Song et al. [2021]. There, the starting distribution is complicated, but the stochastic process is linear. In our case, we are interested in the process itself, particularly non-linear processes. In this way, our work generalises the finite-dimensional work on conditioning non-linear SDEs and infinite dimensional score matching, where they consider time reversals of linear SDEs
What is going on? Which score function is intractable? There is a lot of this kind of thing where technical statements are made without reference to the equations that ground them. This becomes (more) clear after reading the whole paper, but in the order that the paper is written, this entire section is hard to parse.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. What is happening in figure 1? It is not easy to parse the start and end shapes. I can just about work it out from section 6.2.2, but can we add some visual cues in the figure, for example, fading out the starting shape as time goes on, and fading in the terminal one?
> we first use two butterflies with somewhat different shapes [GBIF.Org User, 2024]. One trajectory between the two butterflies is plotted in Figure 2. In this, we can see the high correlation between neighbouring points, with a Brownian temporal model. In Figure 1, we plot 120 butterfly trajectories, at specific time points. For t = 0.2 we see that the butterfly outlines are mostly close to the start butterfly in pink, and at time t = 0.8, they are closer to the green target butterfly
2. 4.2/l159
> Moreover, under this measure $\mathbb{Q}, x(t)$ satisfies a new SDE
>$$
> \mathrm{d} x^c(t)=f\left(t, x^c(t)\right) \mathrm{d} t+\sigma \sigma^T\left(t, x^c\right) \nabla_x \log h\left(t, x^c(t)\right) \mathrm{d} t+\sigma\left(t, x^c(t)\right) \mathrm{d} W(t) .
>$$
Can you clarify the relationship between $x$ and $x^c$?
3. l165 confusing phrasing
> When $h(t, x)=p(t, x ; T, y)$ there is no general closed form solution. Different methods to learn the bridge exist Delyon and Hu [2006], Schauer et al. [2017]. More recently, score-based learning methods were proposed to learn the term $\nabla_x \log p(t, x ; T, y)$ Heng et al. [2021].
Do you mean something like this?
> For the required $h(t, x)=p(t, x ; T, y)$ there is no general closed form for $h$. Different methods to learn the bridge exist Delyon and Hu [2006], Schauer et al. [2017]. More recently, score-based learning methods were proposed to learn the term $\nabla_x \log p(t, x ; T, y)$ Heng et al. [2021], and it is the infinite-dimensional generalisation of the latter method that we pursue here
4. eq15:
>$\left\langle X(t, \xi), e_i\right\rangle=\left\langle\xi, e_i\right\rangle+\int_0^t\left\langle A X(s)+f(X(s)), e_i\right\rangle \mathrm{d} t+\int_0^t\left\langle e_i, B(X(s)) \mathrm{d} W(s)\right\rangle$.
Is something wrong with the variables of integration here? $\int_0^t\left\langle A X(s)+f(X(s)), e_i\right\rangle \mathrm{d} t$ is an integral in $t$ and yet the integrand doesn't depend upon t, and it does depend upon $s$ which is a free variable
Minor typos:
* l195
> However, transition operators of form Sec. 4.1 exist and satisfy the Markov property Equation (4)
should that be
> However, transition operators of form Equation (3) exist and satisfy the Markov property Equation (4)
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper seems to depend upon explicit orthogonal bases (sec 5.3) which is IMO a restriction in practice, since the diffusion methods of industrial interest frequently have no such explicit basis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive review and the concrete suggestions. We're glad you appreciate the importance of our result!
It seems to us that the weaknesses you listed were almost entirely presentational. We have fixed the issues you point out in your review as follows.
**Clarity of introduction and related work:**
Thank you for your feedback on the first two pages! We have now edited these sections, some examples of which we give in the following.
> Given an SDE...
Thanks for pointing out this paragraph! This section states that for given $T, y$ the score function $\nabla \log p(t, x; T, y)$ is intractable, since for nonlinear SDEs there is no known closed solution for $p(t, x; T, y)$.
We have now removed this paragraph and instead incorporate it into the related work on infinite dimensional diffusion models. The corresponding subsection in the related work now reads (previously lines 69-74):
> Recent work on generative modelling has investigated score matching for infinite-dimensional diffusion processes [Pidstrigach et al., 2023, Franzese et al., 2023, Bond-Taylor and Willcocks, 2023, Hagemann et al., 2023, Lim et al., 2023]. This problem is similar to our task of conditioning an SDE, but not the same: The main difference is that our SDEs are fixed, known a-priori, and potentially nonlinear, whereas in generative modelling the SDE can be chosen freely. Hence, generative modelling often uses linear SDEs because the transition densities are known in closed form. In this sense, our problem relates to generative modelling, but has a different setup.
The references to generative diffusion models have been moved to the paragraph about score matching for finite-dimensional nonlinear bridges (previously lines 61-68):
> Recently, Heng et al. [2021] adapted the score-matching methods of Vincent (2011), Song and Ermon (2019), and Song et al. (2021) to learn the score term for non-linear bridge processes. To do so, they introduce a new loss function to learn the time reversal of the process. They then learn the time reversal of the time reversal, which gives the forward bridge. Our work uses their method to learn the score term after discretising the SDE via truncated sums of basis elements. Phillips et al. [2022] also consider using truncated sums of basis elements for discretising SDEs, however, only for infinite-dimensional Ornstein-Uhlenbeck processes, which are linear.
We hope that these changes resolve your presentational issues, and that you agree with us that the presentation is improved.
**To answer your questions:**
1. Thanks for the suggestion. We've tried to make this figure clearer now (see PDF in the general reply). The caption in the PDF will also be included in the paper, which we hope gives additional clarity.
2. Thanks for the question. The process $x^c$ can be thought of as the conditioned version of $x$. For example, if $h(t, x) = \frac{p(t, x; T, y)}{p(0, x_0; T, y)},$ then for a set $B\subset\mathbb{R}^d$, $\mathbb{P}(x^c(t) \in B) = \mathbb{P}(x(t) \in B \mid x(T)=y)$ holds.
We've refined the corresponding explanation in Section 4.2.
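As a concrete illustration of $x^c$ (a minimal one-dimensional sketch of our own, not the paper's infinite-dimensional method): for standard Brownian motion the transition density $p(t, x; T, y)$ is Gaussian, so the $h$-transform drift $\sigma\sigma^T \nabla_x \log h$ has the closed form $(y - x)/(T - t)$, and forward simulation of the conditioned SDE produces bridge paths that end near $y$.

```python
import numpy as np

# Brownian bridge via Doob's h-transform: for sigma = 1, the conditioned
# SDE has drift (y - x) / (T - t), which pins paths to y at time T.
rng = np.random.default_rng(0)
T, y, n_steps, n_paths = 1.0, 2.0, 1000, 256
dt = T / n_steps
x = np.zeros(n_paths)                 # x^c(0) = 0 for all paths
for k in range(n_steps - 1):          # stop one step early: drift is singular at T
    t = k * dt
    drift = (y - x) / (T - t)         # sigma sigma^T grad_x log h in closed form
    x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

print(float(x.mean()))                # conditioned paths cluster near y = 2
```

The simulation stops at $T - \Delta t$ because the bridge drift blows up at the terminal time; by then the path distribution is already concentrated around $y$.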
3. Yes, this is precisely what we mean. We've edited it, so it's hopefully more clear now. The new version reads:
> For $h(t, x) := \frac{p(t, x; T, y)}{p(0, x_0; T, y)},$ as in conditioning on an end point, there is, in general, no closed form solution for $h$. Different methods to learn the bridge exist (Delyon and Hu [2006], Schauer et al. [2017]). More recently, score-based learning methods were proposed to learn the term $\nabla_x \log p(t, x; T, y)$ (Heng et al. [2021]), which we will adapt to the infinite-dimensional setting.
4. That was a typo, thanks for catching it! It is supposed to read
$$\langle X(t, \xi), e_i \rangle = \langle \xi, e_i\rangle + \int_0^t \langle AX(s) + f(X(s)), e_i\rangle \mathrm{d}s + \int_0^t\langle e_i, B(X(s))\mathrm{d}W(s)\rangle.$$
We have corrected this typo in the paper.
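For concreteness, here is a small sketch (our own illustration; the sine basis on $[0, 1]$ is an assumed choice) of how an element of $L^2$ is reduced to the coefficients $\langle \xi, e_i \rangle$ that appear in the coordinate form of the SDE above:

```python
import numpy as np

# Truncate an L^2([0, 1]) element to coefficients <xi, e_i> in an
# orthonormal sine basis, then reconstruct from the truncated expansion.
grid = np.linspace(0.0, 1.0, 2048, endpoint=False)
dx = grid[1] - grid[0]

def e(i):
    """Orthonormal sine basis element e_i on [0, 1]."""
    return np.sqrt(2.0) * np.sin(np.pi * (i + 1) * grid)

xi = grid * (1.0 - grid)                    # some element of L^2
coeffs = np.array([np.sum(xi * e(i)) * dx for i in range(64)])
recon = sum(c * e(i) for i, c in enumerate(coeffs))

print(float(np.max(np.abs(recon - xi))))    # small truncation error
```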
We would also like to clarify the limitation you mentioned. There are some non-standard Hilbert spaces without explicit orthogonal bases (e.g. Sobolev spaces on non-standard manifolds). However, when working in $L^2$ or Sobolev spaces on Euclidean spaces and spheres, one does have access to explicit bases, so this might be less of a limitation than indicated in the review.
We hope that you agree that the presentational aspects are now improved!
In any case, thank you for the positive evaluation and the concrete suggestions for how to improve the presentation -- we've updated the paper accordingly, as outlined above.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their explanations. Indeed, my concerns are mostly presentational; I think this is a good paper. The authors have addressed my questions. I have revised my score up accordingly. | Summary: This paper explores the conditioning of non-linear processes in infinite dimensions. To achieve this, the authors introduce an **infinite version of Doob’s $h$-transform** (contribution 1) that relies on the infinite-dimensional counterparts of Itô’s lemma and Girsanov’s theorem. They then discretize the conditioned process and use score-matching techniques to **learn the score** arising from the $h$-transform by training on the coefficients of the Fourier basis, which allows sampling from the conditioned process (contribution 2).
These mathematical tools are used to condition a process to hit a specific set at the end time, also known as bridges. The authors **detail two models** based on different scenarios: one for direct conditioning on data (**exact matching**) when the transition operator of the SDE solution is smooth, and the second for assuming some observation error (**inexact matching**) (contribution 3).
They **illustrate their procedure by modeling changes in the morphometry** (i.e., shapes) of organisms in evolution, specifically the changes in the shapes of butterflies over time (contribution 4).
Strengths: The theoretical mathematical contribution, namely the conditioning of non-linear processes in infinite dimensions, is noteworthy and broadens the scope of previous work that focused on approximating non-linear bridge processes in finite dimensions (Delyon and Hu, 2006, van der Maulen and Schauer, 2022). The infinite version of Doob’s $h$-transform, although not surprising in its form (similar to the finite-dimensional case), is of independent interest. The general procedure, based on this transform, allows conditioning without discretizing the model beforehand.
Two models are developed: exact matching and inexact matching. Inexact matching involves conditioning the process so that at the final time it does not exactly satisfy a final condition but approaches it. This approach is particularly interesting as it incorporates potential observation errors (by introducing noise) and relaxes the restrictive Assumption 3.3, which is unavoidable in the case of exact matching.
The application to modeling changes in morphometry, using bridge processes between shapes, is highly relevant for illustrating the usefulness of the developed procedure. Unlike previous work on the subject (Arnaudon et al., 2019, 2022), the conditioning precedes discretization, ensuring the proper definition of the bridge even as the number of points tends to infinity.
The overall presentation of the paper is excellent: the introduction effectively situates the study within the existing literature on related topics (approximation of non-linear bridge processes, learning score functions in generative diffusion models, diffusion bridges in shape spaces), the contributions are clearly outlined, and the tools developed (the infinite version of Doob’s $h$-transform) are introduced in a pedagogical and concise manner without sacrificing rigor.
Apart from a few minor confusions in the notation, the proofs seem correct and well-written.
Weaknesses: ### Assumption 3.3
As mentioned in the article itself, Assumption 3.3—indispensable in the case of exact matching—concerning the regularity of the transition function, is strong. An example of a subset $ \Gamma$ (finite-dimensional cylinder) matching this condition is provided in Section 5.3. It seems to me that the inherent difficulty of Assumption 3.3 for exact matching in infinite dimension is circumvented by choosing an example where the problem is ultimately 'reduced to finite dimension'. Maybe an example with conditions on the solution process itself $(X_t)\_{t\in\mathbb{R}_+}$ or on the coefficients of the SDE it satisfies, illustrating Assumption 3.3, would be more interesting.
### Theoretical achievement
As it stands, the article develops an interesting method (though perhaps not completely groundbreaking) based on an extension of the Doob $h$-transform and reversing the usual discretization-conditioning steps, allowing for a well-defined bridge despite the difficulty associated with infinite dimensions. Perhaps a theoretical study of the error between the conditioning and the true solution would strengthen relevance of the approach.
### Notation
It's not really a weakness, but the notations should be harmonized (for example $ D_2 $ or $D_x$, $x_0$ or $\xi_0$) to make reading and reviewing the proofs easier. Perhaps a summary table of notations could be included?
Technical Quality: 3
Clarity: 4
Questions for Authors: The questions follow the potential identified weaknesses:
**Assumption 3.3** Can you illustrate it by providing conditions on the solution process $(X_t)\_{t\in\mathbb{R}_+}$ itself rather than on the set $\Gamma$? Perhaps using results on the regularity of the solution process density via the Malliavin derivative (S. Kusuoka and D. Stroock, "Applications of Malliavin calculus, part II", Kohatsu-Higa and Tanaka, Annales IHP 2012, D. Nualart, M. Zakai, Séminaire de probabilités 1989), or the parametrix method (Bally and Kohatsu-Higa, AAP 2015)?
**Theoretical analysis** Would it be possible to quantitatively measure the quality of the procedure, i.e. to provide an upper bound on the error between the conditioning and the true solution in the case of exact matching? In the case of inexact matching, can we measure the impact of the noise on this error?
**Proof of Lemma C.4** It seems to me that the proof of Lemma C.4 corresponds to the calculation of the infinitesimal generator associated with the process
$(X_t)\_{t\in\mathbb{R}_+}$ and not to what is stated. In the statement of the Lemma $h$ is defined as $h(t,\xi)=\mathbb E[\psi(X(T-t,\xi))]$ whereas in the proof, $h(t,x):=\mathbb E[\psi(X(t,\xi))]$. Adapt the proof maybe by defining $g(t,x)=h(T-t,x)$.
### Minor comments
There are some typographical errors and notational awkwardness. These notation issues recur repeatedly:
- l. 98, 137, 185 etc.: Write $W$ or $\{W_t\}$ instead of $W_t$ when dealing with a process. Same remark for $e^{tA}$ (l. 101).
- l. 140, 147, 198 etc.: The initial condition of the SDE is denoted first by $x_0$ (Equation (1)) and later by $\xi_0$, $x$. Please harmonize the notation.
- l. 590, Equations (35) and (36): Partial derivatives can be denoted as $D_x$ or $D_2$. Please harmonize the notation.
Here is a non-exhaustive list:
- l. 80: $f(T,s_0)=s_1, f$. I don't understand the sentence.
- l. 155: Add $x(0)=x_0\in\mathbb{R}^d$.
- l. 156: $X$ should be lowercase.
- l. 158: What is $p$ ?
- l. 158: I think we should have $\mathbb{E}[Z(T)]=1$.
- l. 159: I think it is $d\mathbb{Q}/d\mathbb{P}|\mathcal F_t$.
- l. 234: Equation (13), what is $v$?
- l. 553: Define $[L]$.
- l. 558: How do you define $D_2$? I think it's $\partial_x$ or $h_x$?
- l. 562: Using that $Z(t)=h(t,X(t))$
- l. 569: What is $H_Q$ ?
- l. 583: $Z(s)$ instead of $Z_s$
- l. 583 : $C_T=1/P_T\psi(\xi)$ by definition
- l. 590 equation (35), equation (36) harmonisation of notation with the reference lemma of Itô formula
- l. 599: Lemma C.5, how do you define $c^i$? And $c_i$ on l. 601?
- l. 618 : $d\widehat P=Z(T)d\mathbb{P}$ ($d$ is missing)
- Define properly the Hilbert space $Q^{1/2}(H)$.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations are briefly discussed in the conclusion, particularly the fact that the procedure would not be applicable to weak solutions of SDEs. Some directions for future research (focus on network architecture to increase the dimension that can be considered, infinite-dimensional bridges to inference problems) are also provided. There is no potential negative societal impact in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and the insightful questions. We're glad you liked the paper and find the theoretical contribution noteworthy!
In the following, we address your questions one by one:
1. Assumption 3.3: Yes, you're right and we agree this would be nice to include!
We are aware of a result for SDEs of form
$$dX = [AX + F(X)]\mathrm{d}t + \sqrt{Q}\mathrm{d}W(t),$$
with $F$ being once differentiable, and $A, Q$ satisfying some extra assumptions (see Theorem 9.39/9.43 of [1] or Section 6.5, 7.3 of [2]; references below). We will include this in the discussion of Assumption 3.3.
In general, it would indeed be nice to prove something for other cases, especially in the case of stochastic flows as in [3], which could perhaps be done using the Malliavin derivative as you point out. We plan on looking into this for future work.
2. Theoretical analysis: We agree this kind of result would be interesting! However, deriving such an error estimate would require too much additional analysis for now, which is why we leave it to future work.
3. Proof of lemma C.4: Yes, you're right! We've corrected this now. Concretely: We define $g(t,x)=h(T-t,x)$ (where we before showed that $g$ is differentiable in time). Then, note that the function $t \to T-t$ is differentiable, and therefore $h(t, x)$ is, too. The spatial Fréchet differentiability still holds, since we showed it holds for all $t \in [0, T]$.
Thanks a lot for the feedback on the notation and for listing the small errors! We've now incorporated these into the paper and made sure all the notation is consistent.
We hope that you agree that this improves the presentation.
Thank you again for the positive review, and we look forward to further discussion!
[1] Giuseppe Da Prato and Jerzy Zabczyk. Stochastic Equations in Infinite Dimensions. Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2nd edition, 2014.
[2] Cerrai, S., Second Order PDE’s in Finite and Infinite Dimension: a Probabilistic Approach, Lecture Notes in Mathematics. Springer, 2001.
[3] Hiroshi Kunita. Stochastic Flows and Stochastic Differential Equations. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1st edition, 1997.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of the rebuttal
Comment: I thank the authors for their rebuttal. I remain confident of the quality of their paper, suggest the acceptance and keep my score. | Summary: This paper addresses the challenge of conditioning infinite-dimensional stochastic processes, particularly non-linear ones, without prior discretisation. Traditional methods condition finite-dimensional data but struggle with infinite-dimensional, function-valued data. The authors employ an infinite-dimensional version of Girsanov’s theorem and Doob’s h-transform to condition such processes. This method is applied to time series analysis of shapes in evolutionary biology, specifically modelling changes in the morphometry of organisms. The paper also utilizes score matching techniques to learn the coefficients of the score function in the Fourier basis.
Strengths: - The paper introduces a novel method for conditioning infinite-dimensional non-linear processes without prior discretization, generalizing recent work on linear processes in infinite dimensions.
- The authors derive Doob’s h-transform for infinite dimensional non-linear processes, allowing conditioning without first discretizing the model. Then, score matching is used to learn the score arising from the h-transform by training on the coefficients of the Fourier basis.
- The paper demonstrates a practical application to evolutionary biology to model changes in the shapes of organisms.
Weaknesses: - The empirical experiments focused specifically on modeling the change in the shape of butterflies. Thus, it's unclear how the method performs in more general benchmarks for diffusion processes.
- Computational complexity could be large for the the proposed method especially for the non-linear setting.
- The evaluation lacks necessary comparison with related approaches.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the computational complexity of the method?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive review!
In the following, we would like to briefly clarify the lack of related approaches (this work is, to the best of our knowledge, the first to operate in an infinite-dimensional, nonlinear setting)
and describe the computational complexity:
1. Comparison with related approaches / benchmarks: We are unaware of benchmarks or other approaches for nonlinear **and** infinite-dimensional processes -- we are only familiar with nonlinear (but finite-dimensional) work (e.g. [1]) as well as infinite-dimensional (but linear) work (e.g. [2]), neither of which directly compares to ours.
2. Complexity: Thanks for asking! The complexity of a single evaluation of the loss of a minibatch of $B$ trajectories with $N$ time steps and in dimension $d$ (which in this case corresponds to the number of basis elements) is $O(B \cdot N \cdot d^3)$. The cubic factor comes from covariance-matrix arithmetic, which is common for multi-output stochastic processes (both finite- and infinite-dimensional) (see e.g. [3]). We are planning on bringing that factor down in a follow-up project.
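The quoted $O(B \cdot N \cdot d^3)$ cost can be made concrete with a small flop-count sketch (our own illustration; the constant factors and the Cholesky example are assumptions about where the covariance arithmetic lands, not exact profiler numbers):

```python
# Each of the N time steps of each of the B trajectories performs d x d
# covariance-matrix arithmetic (e.g. a Cholesky factorisation), which
# costs O(d^3) flops -- hence the cubic factor in the dimension d.
def loss_cost(B: int, N: int, d: int) -> int:
    """Leading-order flop count for one minibatch loss evaluation."""
    return B * N * d**3

# Doubling the number of basis elements multiplies the cost by 8.
print(loss_cost(32, 100, 64), loss_cost(32, 100, 32))
```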
We thank you again for your positive evaluation! We hope that we were able to clarify a few points and we look forward to the discussion!
[1] Frank van der Meulen and Moritz Schauer. Automatic backward filtering forward guiding for markov processes and graphical models. arXiv preprint arXiv:2010.03509, 2022.
[2] Jakiw Pidstrigach, Youssef Marzouk, Sebastian Reich, and Sven Wang. Infinite-dimensional diffusion models for function spaces. arXiv preprint arXiv:2302.10130, 2023.
[3] James Hensman, Nicolò Fusi, and Neil D. Lawrence. Gaussian processes for Big data. Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence. 2013.
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: The authors addressed my comments, and I've raised my score accordingly. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their reviews and positive evaluations of our work. We are glad you all liked our contribution!
Many weaknesses seem to relate to presentational concerns, which we believe are easy to correct.
Below, we reply to all reviews in separate threads.
Attached is a PDF that contains an update to Figure 1, relating to the review by Reviewer veYd.
We look forward to the discussion!
Thank you again for the positive reviews, and best wishes,
The authors
Pdf: /pdf/6106d6b82b3f0bfebe59c120dd4f995f043a0919.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On Affine Homotopy between Language Encoders | Accept (poster) | Summary: The paper introduces formal mathematical concepts of intrinsic and extrinsic homotopy between language encoders. The first concept compares the behavior of two encoders on a concrete dataset, while the second concept compares them independently from the concrete dataset. The paper also demonstrates how to apply these concepts to measure the difference between different versions of the pre-trained BERT model.
In more detail, chapter by chapter:
- The 1st chapter introduces the problem of comparison of text encoders.
- The 2nd chapter is devoted to the discussion of the formal definition of a language encoder. The authors define a language encoder $\mathbf{h}$ very generally: as a function from all possible strings of some alphabet $\Sigma$ to a $d$-dimensional vector space $V$. They mention some ways to train an encoder in practice; however, for the rest of the theoretical part of the paper, $\mathbf{h}$ remains general enough and is not required to be implemented through a neural network.
- The 3rd chapter discusses the formalism of hemi-metrics and introduces uniform convergence norms on the space of language encoders. The uniform convergence norm allows the authors to introduce an affine alignment measure that shows how aligned two encoders are with each other.
- The 4th and 5th chapters introduce the notions of Intrinsic and Extrinsic Affine Homotopy on encoders. These are based on the affine alignment measure from the previous chapter and on encoder rank.
- The 6th chapter is devoted to adapting the notions above to the real-life scenario, where we can't check the encoder's performance on all possible strings of our alphabet. Finally, asymmetric extrinsic and intrinsic similarity measures, which are usable in practice, are derived.
- In the 7th chapter the authors measure an extrinsic and intrinsic similarity between pre-trained BERT models, all trained with similar hyper-parameters but differing in random weight initialization and shuffling of training data (see "The MultiBERTs: BERT Reproductions for Robustness Analysis" paper). They show that these pre-trained BERT models can be differentiated from each other by the methods, introduced in the paper. They also provide a table of the correlations between their notions of similarity and some previously introduced notions of similarity between encoders. All experiments were performed using SST-2 and MRPC datasets.
- Finally, in the 8th chapter they discuss relations between extrinsic and intrinsic dissimilarities, their asymmetry, and the finding that some BERTs are more similar to other BERTs by affine similarity in one direction, but not necessarily in another.
Strengths: - The paper introduces a novel way to compare the language encoders;
- Interesting and non-trivial mathematical tools are utilized.
Weaknesses: First of all, it looks like this paper is not a good fit for this venue by its very nature:
- It is unclear what the contribution of this work to the field of Neural Information Processing Systems is. The paper introduces many novel concepts, but most of these concepts, starting from the definition of the language encoder itself, are very general, so their connection with neural networks looks far-fetched. The usefulness and purpose of the experiments are also unclear (see "Questions" section).
- The 9-page format is too short for this type of scientific work; the authors had to overuse the appendices to fit their work into the limited space. When I was reading the paper, I had to constantly go around in circles, moving between Appendix D ("Addenda on Affine Homotopy") and the main part of the paper.
Second, there are some general problems with the research and text:
- The experimental section in general is very limited. Only various versions of the BERT models are compared with each other, and only two datasets are considered. In line 235 there is a note that the experiments show some task-independence, but how can we say anything about task-independence when we have only two tasks at hand?
- The conclusion and findings of the paper are also unclear (see "Questions" section).
- The paper is generally hard to read and understand. As noted above, this problem is partially due to the unsuitable format; however, it is also due to typos (see below) and a lack of needed definitions (see "Questions" section).
Technical Quality: 2
Clarity: 2
Questions for Authors: Questions:
- Why is your mathematical tool for checking whether two encoders are similar to each other (line 153) called "Affine homotopy"? What is the connection to the notion of homotopy that we know from algebraic topology?
- You found that some BERTs are more similar to other BERTs by affine similarity in one direction, but not in another, and that this fact says something about the "universality" of BERTs, that are easier to map "from". Can you please elaborate on this conclusion about their universality?
- By which machine learning algorithm is $\mathbf{Ð}$ approximated and how?
- What was the point of defining a dataset-independent way to measure the difference between encoders, if, starting from section 6, everything, in essence, became dataset-dependent again? (and of course, all experiments were dataset-depended as well)
- What is the purpose of the Figure 2? What can it tell about MULTIBERT encoders number $1, ... , 25$?
- In general, your paper doesn't explain what is the exact difference between, let's say, MULTIBERT encoder number $1$ and number $2$ (except that they are both BERTs with different weights init, etc). So what's the point of caring about their comparison in the first place? What useful conclusion can we make from such a comparison?
- What is the purpose of Table 1? If I understand correctly, it shows a correlation between different similarity measures, but what conclusion should we make from it?
- The most pressing question: what are $E_x$ and $E_y$, introduced in line 103 but never explained? I had to assume that they were just some neighborhoods of $x$ and $y$ to somehow get through the rest of the paper, but I'm not sure that's true.
Suggestions:
- I would suggest experimentally measuring the similarity of BERT with encoders of different natures, e.g. ALBERT, ELECTRA, and maybe even some LSTM-based encoders. Maybe encoders with similar architectures (or trained on similar data) will be more similar by your similarity measures than encoders with very different architectures (or trained on very different data)? Or maybe not?
- I would suggest using more datasets for any further experiments.
- You heavily cite the "Metrics, quasi-metrics, hemi-metrics" chapter from the book "Non-Hausdorff Topology and Domain Theory". However, this book is not well-known outside the professional mathematical community. So, I would suggest explaining the concepts from the book in more detail and providing some examples of hemi-metrics that show why one would want to use a hemi-metric instead of a symmetric metric.
- Finally, I would suggest turning this work into a **journal paper** (after a revision) and merging Appendices D and B ("Additional Related Work") with the main part of the paper to make it more readable.
---
Typos and presentation issues:
- On the 1st page, footnote №4, typo:
> In principle, one could relax the replace R^d with any finite-dimensional vector space
- In line 63, you wrote:
> There are two common ways that language encoders are created. The first is through autoregressive language modeling.
And the second one? Did you mean Masked Language Modelling?
- In line 119: typo:
> Let GL(V) We write
- In line 119, you introduced GL(V) without definition. I assumed it is the general linear group for the rest of the paper.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The authors addressed the limitations of the theoretical part of their paper. I'd suggest adding the limitations of the experimental section (one architecture, a few datasets) as well (see above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and review our work – we are grateful for all the suggestions. Further, we are happy to hear that our usage of novel non-trivial mathematical tools for the analysis of the space of encoders is appreciated. Below, we address the concerns raised in the review:
- **“It is unclear what is the contribution of this work to the field of NeurIPS”**: Although we recognize that our mathematical tools are non-standard for NeurIPS, this is an attempt to more rigorously characterize the space of encoders as what they are – functions between countably infinite sets – and to derive formal bounds that have concrete practical implications for downstream transfer learning behavior – and to that end, such tools are required. We further note that this paper follows a line of work, published at NeurIPS, on the similarity of neural representations (e.g., Boix-Adsera et al. (2022), Ding et al. (2021)), with extensions to the formalization of the problem as well as the objective of providing guarantees for extrinsic, downstream task behavior.
---
- **format, “hard to read and understand”**: We retain only the relevant interim results in the main text that contribute to the main results of the paper, leaving the proofs to interested readers in the appendix. We acknowledge the resulting density of §2-5 and would use any additional space to add examples and proof outlines to support better understandability for the broader ML community.
---
- **“Only various versions of the BERT models are compared with each other”, “What's the point of caring about [MultiBERTs’] comparison in the first place?”**.
As found in the original paper and as seen in the experiments in §6, MultiBERTs may behave very differently among themselves. More concretely, MultiBERTs show vastly different downstream task performance (Sellam et al. 2022) and produce representation matrices of different ranks to precision $\epsilon$ (cf. §6). These are important attributes of encoders that we aim to capture in our notion of similarity, which are often disregarded (as discussed in §8). As such, MultiBERTs offer themselves well to investigate which task-independent (intrinsic) relation between the encoders best captures extrinsic dissimilarity in practice – the subject of our study in §7. Still, we acknowledge that a study across different architectures would yield broader comparisons, and we plan on including such a study as well as more extensive dataset coverage to complement our current experimental section.
---
- **Connection to homotopy in algebraic geometry**: In classical algebraic topology, homotopy describes the continuous deformation of one function into another, typically parameterized by the interval $[0, 1]$. However, in this paper, we consider the concept of homotopy by using a broader set of parameterizations, denoted by $S$. This allows for more flexible transformations suited to specific classification problems. For instance, $S$ could be an affine space $Aff(V), Aff(V,W)$ or others, enabling more adaptable frameworks for continuous transformations.
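For context, the textbook notion from algebraic topology that is being generalized here can be stated as: two continuous maps $f, g : X \to Y$ are homotopic if there exists a continuous map

```latex
H : X \times [0, 1] \to Y
\quad \text{with} \quad
H(\cdot, 0) = f
\ \text{ and } \
H(\cdot, 1) = g .
```

Replacing the parameter space $[0,1]$ by a set of transformations $S$ (e.g., an affine space) yields the broader parameterization described in the reply above.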
---
- **“Universality of BERT”**: From Theorem 4.1, we derive the influence of the encoder rank on affine mappability. We confirm that affine mappability is strongly influenced by the encoder rank in our experiments in Appendix G. This leads to the conclusion discussed in §7, where we find that some encoders seem to learn lower-rank representation matrices (to precision $\epsilon$), and this correlates significantly with how easily one can affinely map to that encoder, which influences their concrete utility for transfer learning by Lemma 5.1.1 and Def. 5.2. Considerations about the asymmetry are discussed separately in §8.
---
- **“How is Ð approximated”**: Ð is implemented as gradient descent over affine maps (as mentioned in §6), where the loss is simply the max loss over all strings in the dataset. The gradient of the max is handled through subgradients in standard PyTorch.
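To make the described procedure concrete, here is a hedged, illustrative sketch (not the authors' implementation; the synthetic data, dimensions, and the use of NumPy with an explicit subgradient in place of PyTorch autograd are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative encoder outputs on a finite dataset: rows of H and G are
# representations h(x), g(x) of the same strings x (shapes are made up).
n, d = 50, 4
H = rng.normal(size=(n, d))
A_true = rng.normal(size=(d, d))
G = H @ A_true.T + 0.01 * rng.normal(size=(n, d))  # g ~ affinely reachable from h

# Subgradient descent on the max loss over the dataset:
#   minimize over (A, b):  max_x || A h(x) + b - g(x) ||^2
A, b = np.zeros((d, d)), np.zeros(d)
lr, best = 0.02, np.inf
for _ in range(5000):
    residuals = H @ A.T + b - G               # (n, d)
    per_point = (residuals ** 2).sum(axis=1)  # loss per string
    best = min(best, per_point.max())         # track the best iterate
    i = int(per_point.argmax())               # active point of the max
    grad_out = 2 * residuals[i]               # subgradient at the argmax
    A -= lr * np.outer(grad_out, H[i])
    b -= lr * grad_out

d_hat = np.sqrt(best)  # approximate asymmetric distance from h to g
```

Keeping the best iterate is standard practice for subgradient methods, which are not monotone descent methods in general.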
---
- **“Task-dependence of experiments”**: This is a valid concern and a limitation of the work, which we partially acknowledge in the “Limitations” section. We note, however, that the formalism of a language encoder is by definition a map between countably infinite sets (strings to vectors), and our work makes a first attempt to derive properties of such a construction, which, evidently, also have practical implications for the encoders' use in transfer learning (cf. Discussion and §7). The approximation to the finite-string setting both shows the applicability of theoretical results on real datasets and allows us to draw comparisons to existing methods in representational similarity.
---
- **“Purpose of Figure 2”**: As described in §3, Figure 2 plots intrinsic and extrinsic similarity, and, on a larger scale, shows an empirical approximation to the theoretical linear bound derived in Lemma 5.1. This does not tell us much about the encoders individually, but may act as a visual aid in seeing the empirical implications of the connections between our notions of intrinsic and extrinsic similarity.
---
- **“Purpose of Table 1”**: Correct. It shows the correlation between affine intrinsic similarity measures (cols) and extrinsic measures of similarity (rows). As in the experimental setup in Ding et al. (2021), this measures how strongly differences in extrinsic similarity are picked up by the intrinsic similarity measures. We find that Ð (as well as other linear alignment methods) tend to be strongly (and significantly) indicative of the extrinsic, downstream behavior, as theorized by the upper bound in Lemma 5.1.1.
---
- **“What are Ex and Ey”**: Equivalent to the set E defined in Def. 3.3, $E_x$ and $E_y$ are non-empty subsets of $X$ and $Y$, respectively. We will add a comment, thanks!
We again thank the reviewer for their suggestions and hope that we could address most concerns and open questions about the soundness and overall contribution of the paper.
---
Rebuttal Comment 1.1:
Title: Please respond to authors' rebuttal
Comment: Dear Reviewer sF8t,
Thanks for your review. The authors have replied to your comment. Please engage in the discussion. After reading their rebuttal and other reviewers' feedback. Are you keeping the score unchanged and would you like to change your score?
Thanks
AC | Summary: In this paper, the authors study the natural question "What does it mean for two encoders to be similar?" and propose an intrinsic measure of similarity that aligns with extrinsic performance on downstream tasks. The paper introduces the concept of affine alignment and explores its properties and implications for understanding encoder relationships.
The main contributions are as follows:
1. The authors first define a metric space on encoders.
2. The authors extend the definition to account for transformations in a broad framework of $S$-homotopy for a set of transformations $S$.
3. As a concrete application of the framework, the authors study affine homotopy—the similarity for affine transformations.
Strengths: 1. The idea of the paper is quite novel. Homotopy is a very important tool in algebraic topology. The application of homotopy theory to machine learning is very attractive.
2. The authors first define an (extended) metric space on encoders and then extend this definition to account for transformations in a broad framework of $S$-homotopy for a set of transformations $S$.
Weaknesses: 1. The motivation for introducing affine Hemi-Metrics to measure the similarity of two encoders is not so clear to me.
2. The definition of Extrinsic Homotopy does not read like a mathematical definition, since the authors do not state how the performance is quantified.
3. More experiments are needed for supporting the claim in the paper.
4. The method does not seem practical.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Please explain clearly the motivation for using hemi-metrics, perhaps from both the topological and the machine learning perspectives.
2. Please explain the connection between intrinsic homotopy and extrinsic homotopy. I think it can help us understand theorem 5.1 better.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review and helpful feedback. We are pleased to read that you appreciate the novelty of the approach and the breadth of the ideas introduced in the paper. Below, we address the specific concerns raised in the review:
1. **“Elaborate your Motivation of using Hemi-Metrics”**: Although some previous work has explored measuring similarity in proper metric spaces, we posit that purely from a machine learning perspective, there is an asymmetry to the problem of measuring the similarity between encoders. More concretely, when we assess the similarity of two encoders in terms of how closely we can affinely map their classification probabilities on a task, we find this to be asymmetric theoretically as well as practically. Some encoders may be more powerful than others (e.g., think of a lower-rank encoder, such as the ones generated in Appendix G). A symmetric distance, therefore, is not practical as it does not capture this phenomenon. This motivates using Hemi-Metrics.
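For reference, the standard definition (cf. Goubault-Larrecq's "Non-Hausdorff Topology and Domain Theory", cited in the paper) drops only the symmetry axiom of a metric: a hemi-metric on a set $X$ is a map

```latex
d : X \times X \to [0, \infty]
\quad \text{such that} \quad
d(x, x) = 0
\ \text{ and } \
d(x, z) \le d(x, y) + d(y, z)
\ \ \forall x, y, z \in X,
```

with no requirement that $d(x, y) = d(y, x)$, which is exactly the asymmetry motivated above.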
---
2. **“Extrinsic Homotopy does not seem like a mathematical definition”**: We agree that Definition 5.1 is somewhat informal. We will explicitly state this in the next revision – thank you for pointing this out. We note, however, that we provide a formal mathematical definition of extrinsic homotopy in Lemma 5.2 similar to intrinsic affine homotopy in Def. 4.2.
---
3. **“More experiments are needed to support the claim”**: We would like to highlight that most theoretical results/claims (Thm. 4.1, Lemma 5.1, and Thm. 5.1) were evaluated empirically across a large number of encoders that exhibit significant differences in properties and downstream task behavior (cf. §6, [1], [2]). Still, in light of this and other reviews, we acknowledge that an evaluation across more datasets may help support the claim, which we will include for a next revision – we thank the reviewer for the suggestion!
---
4. **“Such method seems not practical”**: Although we acknowledge the gap between some properties derived in §2-5 and the experimental results (see “Limitations”, Appendix A), our derivation of intrinsic similarity measures that upper bound extrinsic dissimilarity (Lemma 5.1, Def. 5.2) is purely practically driven. Namely, we show that our measure of intrinsic similarity is indicative of how an encoder may perform across downstream tasks in the transfer learning setting. Further, our discussion of the algebraic properties of affine homotopy gives us a rich theoretical foundation useful for understanding phenomena we observe empirically (cf. §7-8), such as:
- The theoretical upper linear bounds of intrinsic similarity on task performance (Lemma 5.1, Def. 5.2) surface empirically, and therefore show us how our intrinsic measures of similarity can indicate downstream task performance similarity across arbitrary tasks. This is of significant practical interest for encoders’ utility in transfer learning, as this may reduce the need to do task-specific evaluation, as motivated in §1.
- Our proof about how affine mappability is affected by the encoder rank surfaces in our experiments §7 - “The Influence of Encoder Rank deficiency” as well as in our additional experiments in Appendix G.
---
Questions
----
1. See point 1. above
---
2. **“Explain the connection between intrinsic homotopy and extrinsic homotopy [Thm. 5.1]”**: We prove in Lemma 5.1 that for some fixed linear classifier $\psi’$, the extrinsic dissimilarity (i.e., how closely we can map the output probabilities of representations of encoder h to the ones from encoder g) is linearly bounded by the intrinsic similarity measure (Eq. 9b). Theorem 5.1 makes a stronger statement: the Hausdorff-Hoare variant of the intrinsic distance upper bounds the extrinsic dissimilarity over all possible linear classifiers. This shows that our measures of intrinsic homotopy linearly upper bound, and are therefore indicative of, extrinsic homotopy. The practical implications of this are significant, as it shows we can derive encoder similarity measures that are indicative of downstream task behavior without the need for task-specific evaluation.
We again thank you for the insightful suggestions and hope that our clarifications address your concerns about the overall contributions of our work.
---
[1] Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and earlystopping.
[2] Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D’Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, et al. "The MultiBERTs: BERT reproductions for robustness analysis." In ICLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' reply, which answers part of my questions. However, the connection between intrinsic and extrinsic homotopy is still unclear to me, since you also mentioned the similarity between Lemma 5.1 and Def 4.2. Do I understand correctly that extrinsic homotopy is an upper bound on intrinsic homotopy?
Also, regarding the motivation for the Hemi-Metric, you mentioned that "a symmetric distance, therefore, is not practical as it does not capture this phenomenon." But a Hemi-Metric is symmetric, or rather, a metric should be symmetric, right? I may not be getting your point; could you please explain it clearly?
---
Reply to Comment 1.1.1:
Title: Replying to Official Comment by Reviewer nK2G
Comment: Thank you for taking the time to read our response and reply.
- **"I am wondering if I understand it right that extrinsic homotopy is the upper bound of intrinsic homotopy?"**: No, Lemma 5.1.1 and Theorem 5.1 both provide a linear upper bound on **extrinsic** affine homotopy measures by **intrinsic** ones (not the other way around). More specifically, the notions of intrinsic homotopy provide an upper bound on specific extrinsic ones for a fixed task (Lemma 5.1.1) and for the worst-case task (Thm. 5.1). Note that in this way (and not the other) we can make guarantees about (extrinsic) downstream task performance dissimilarity from intrinsic measures of similarity — which is what we aim to achieve.
---
- **"But the Hemi-Metric is symmetric, or a metric should be symmetric, right? I may not get your point, please explain it clearly?"** No, in contrast to metrics, hemi-metrics are, by definition, not symmetric (see Def. 3.2, "Hemi-Metrics", in contrast to Def. 3.1, "Extended Metrics"). Our point is that to derive an intrinsic similarity between encoders that is indicative (cf. Lemma 5.1.1, for instance) of their extrinsic similarity (= the closeness with which another encoder’s task classification probabilities can be matched affinely), we do not want symmetric measures, as they would not capture the directionality of the problem. Directionality in this context means that one encoder may be more powerful than another: one can affinely match the other encoder’s output probabilities closely, whereas the inverse may not be possible (cf. experiments on rank-deficient encoders, Appendix G).
We hope this helped clarify your concerns! | Summary: The paper aims to formally define, derive and then analyse similarity between pretrained language encoders, focusing on aspects of intrinsic (task-independent) similarity and extrinsic (task performance-oriented) similarity. The paper is mostly of theoretical nature, aiming to properly and formally define the studied aspects of similarity and then aiming to propose the idea of transformations in an (affine) homotopic framework. The work makes a step towards more formal studies of representation similarity within language encoders (and probably decoder-style LLMs in future work).
The main non-theoretical finding, based on experiments with MultiBERT models (i.e., BERT models trained from different random seeds) is that there exists (as expected) a correlation between the defined intrinsic and extrinsic notions of similarity.
Strengths: - The paper provides a fresh perspective on the question of similarity between language encoders (and language models) in general, aiming to rigorously formalise and derive different properties associated with intrinsic and extrinsic similarity.
- The paper is quite dense but admirably well-written given its largely formal and 'math-heavy' content. I see it also as a potentially very didactic piece of work which could inspire additional work in this space.
- Related work, limitations, implications of the key results (both from the theoretical as well as from the more practical perspectives) are all very comprehensive and nicely structured.
Weaknesses: - The work is heavily focused on theory and theoretical contributions; this means that its more practical findings are the weaker part of the work and the experimental setup and results are a bit underwhelming:
a) The only studied architecture is the BERT architecture, and the work just aims to quantify correlation between intrinsic and extrinsic similarity for MultiBERT models (which just use different random seeds). As far as I am concerned, a positive correlation between intrinsic and extrinsic similarity is very much expected for this group of models.
b) The work should ideally study other encoder architectures and aim to establish how the notion of similarity changes over the spectrum of 'expected model distance' and what implications it might have. Speaking of the 'expected model distance', what one could/should study here is:
-- models of the same architecture starting from the same seed where different checkpoints were taken
-- models of the same architecture with different random seeds (this is the only thing studied in the paper atm)
-- models of similar architecture but not completely the same (e.g., BERT vs RoBERTa)
-- completely different architectures (e.g., BERT vs ELECTRA)
c) The work also focuses on very basic GLUE tasks which also limits the generalisability of the main findings, and additional experiments over tasks of different complexity are required here to fully trust the core ideas (which would also increase the impact of the work substantially imo).
- While the mathematical rigor in the paper is very useful, the paper would benefit from a short (sub)section that properly 'distills' the key take-home messages in plain(er) language, so that it also becomes more obvious how future work could build on the more theoretical insights (e.g., can the derived measures be used for computing representation similarity in general?). That would be a very useful addition to the work.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Beyond page limit, is there any other reason why only two tasks (and very simple ones) are targeted for the main experiments?
- Have the authors considered running the similarity analyses also with BERT models having different hyper-parameters? Can we expect the affine homotopy properties to hold for such models? What implications might this finding have?
- I am missing the reason why certain encoders end up being more informative than others. How do you define 'being informative' in this context? Why is it important?
- Would it be possible to derive similar measures for decoder-only LMs in the future?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are (for the most part) properly discussed in Appendix A. Some additional reflection on the current limitations of the experiments (and practical aspects of the work) might be necessary and useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our work, as well as for their thorough review and helpful feedback. We are very happy to hear the positive comments about our formalization of the problem and our writing. We address the open points and concerns raised in the review below:
- **"The work should ideally study other encoder architectures and aim to establish how the notion of similarity changes over the spectrum of 'expected model distance' and what implications it might have"** We fully agree that such a study across encoder architectures and additional tasks would complement our current experiments, and we plan to add this in the next revision of the paper – thank you for the suggestion!
---
- **"The only studied architecture is the BERT architecture [...] which just use different random seeds"**: Although we fully agree with the point raised in b) that broader coverage of results would come from studying different encoder architectures and running experiments across more datasets (an omission due solely to space constraints), we would like to restate the motivation and significance of the found results mentioned in §1. Namely, as found in the original paper and as seen in the experiments in §6, MultiBERTs may behave very differently among themselves. More concretely, MultiBERTs have been shown to produce embeddings that yield significantly different downstream task performance (Dodge et al. 2020 [1], Sellam et al. 2022 [2]) and produce representation matrices of different ranks to precision $\epsilon$ (cf. §6). These are important attributes of encoders that we aim to capture in our notion of similarity, which are often disregarded (as discussed in §8). As such, MultiBERTs offer themselves well to investigating which task-independent (intrinsic) relation between the encoders best captures extrinsic dissimilarity in practice – the subject of our study in §7.
---
- **“A positive correlation between intrinsic and extrinsic similarity is very much expected for this group of models”**: our derivation exactly shows that specifically our notions of intrinsic similarity will be correlated with the extrinsic similarity **independent** of the considered group of models. The linear upper bound on the extrinsic dissimilarity by our notion of intrinsic similarity does not make any assumptions about the nature of the encoder, and we therefore expect such correlations to hold across encoder families.
---
- **“Have the authors considered running the similarity analyses also with BERT models having different hyper-parameters?”** As mentioned above, we did not make assumptions about the underlying encoder structure to derive the upper bound on the extrinsic similarity. Although we expect gaps in similarity to become more prominent, we do not expect these to affect the correlation between intrinsic and extrinsic similarity.
---
- **“I am missing the reason why certain encoders end up being more informative than others.”** We refer to the fact that in theory (cf. Thm. 4.1) higher-rank encoders are more powerful, as we can exactly affinely map from their image into the image of a lower-rank encoder, whereas the inverse does not hold. We also find this to surface empirically (cf. §7, “The Influence of Encoder Rank Deficiency”), and it has a significant impact on the affine mappability between the representation spaces (intrinsic similarity), and, as a result, task performance and the extrinsic similarity.
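The asymmetry described above can be illustrated with a toy linear-algebra sketch (purely illustrative; the matrices, the use of a linear rather than affine map, and least squares in place of the paper's alignment measure are all assumptions): a full-rank representation matrix can be mapped exactly onto a rank-deficient one, but not conversely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy representation matrices: rows are representations of n strings.
n, d = 50, 4
H = rng.normal(size=(n, d))       # "higher-rank" encoder: rank 4
P = np.zeros((d, d))
P[0, 0] = 1.0
L = H @ P                         # "lower-rank" encoder: rank 1

def fit_residual(src, tgt):
    """Least-squares linear map from src to tgt; returns the residual norm."""
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return float(np.linalg.norm(src @ W - tgt))

res_down = fit_residual(H, L)     # higher rank -> lower rank: exact fit
res_up = fit_residual(L, H)       # lower rank -> higher rank: large residual
```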
---
- **“Would it be possible to derive similar measures for decoder-only LMs in the future”** Although our method discusses encoder functions motivated by their usage for transfer learning, we do not in principle make assumptions about the nature of the function. In that sense, decoder representations may be evaluated equivalently.
We again thank you for the insightful suggestions and hope that our clarifications address any concerns about the overall contributions of our work.
---
[1] Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping.
[2] Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D’Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, et al. "The MultiBERTs: BERT reproductions for robustness analysis." In ICLR, 2022.
---
Rebuttal Comment 1.1:
Title: I appreciate the response...
Comment: ...and the clarifications provided. I would keep the current score, as my questions were mostly resolved 'theoretically' without providing additional empirical evidence on some of the claims from the responses. | Summary: This paper presents theoretical analysis on "intrinsic alignment" of two or more pretrained language encoders (e.g. BERT trained with various seeds). The work proposes computing the intrinsic alignment between two encoders by first defining an algebraic metric space in which these two encoders exist, and then looking at the affine homotopic transformations that are possible in the given space. The primary motivation of the work is to set theoretical bounds on the similarity of two given encoders, and for the intrinsic alignment to have a strong (positive) correlation to extrinsic alignment (in simple terms, whether two encoders produce similar outputs for similar inputs). The work claims that this is important for having an "elementary understanding" of these encoders, and that having these theoretical guarantees can help derive a richer set of properties of the relationship between encoders.
The work first proposes the methodology to compute these alignments, and then conducts experiments on 25 multiBERT encoders, which are BERT models trained with varying seeds. Two GLUE classification tasks are used, SST-2 and MRPC. The results show that such an alignment can indeed be computed, and that there is a positive correlation between the intrinsic and extrinsic alignments.
Strengths: - The work shown in the paper approaches some of the nuances of modern LM training (randomization, seeds etc) from a theoretical perspective, which can not only help solidify our understanding but also give us concrete bounds and limitations when these models are trained
- The results and discussion emerging from the analysis seems sound, and something that can potentially advance our understanding of these encoders further
Weaknesses: - While the paper makes some effort to depict the practical value of the underlying method, it falls short of giving actual examples. For instance, the paper mentions that deriving intrinsic alignment can help discover "richer" properties - but for a reader (that does not necessarily have theoretical expertise), it is difficult to see what these richer properties look like
- There is also very little discussion of practical aspects of running this analysis; the appendix mentions that intrinsic alignment is more expensive to compute than extrinsic alignment; given that the aim of the work is to set upper bounds, why not just compute the extrinsic scores? I was also unable to find any discussion on whether different kinds of encoders can be compared (which is easy to do for downstream tasks).
Technical Quality: 3
Clarity: 2
Questions for Authors: - What are some of these "richer" properties that intrinsic alignment can help discover?
- Can these be run across encoders with different architectures?
- The paper mentions that one advantage of intrinsic alignment is the possibility to define an order over the encoders. What does this imply?
- I am wondering if these alignment measures can somehow be used to improve pre-training; let's say if we can compute the similarity between a well-trained model and one that is in the training loop, and make training decisions based on how close/far the alignment is. Do you envision any such use of the proposed metric?
- Is there a way to convert the alignment metric into a measure of "badness" for a given encoder? I envision it would be useful to weed out "bad" models than to just compare "good" ones.
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Authors have addressed limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review and the helpful feedback and ideas they provided! We are happy to hear that our theoretical results as well as practical implications are appreciated. In the following, we address the open points and concerns:
- **“For a reader (that does not necessarily have theoretical expertise), it is difficult to see what these richer properties look like”**: “Richer properties” in the introduction refers to our much more general theoretical results and the empirical implications that result from formalizing and studying encoders as what they are – functions between countably infinite sets. For instance, our derivation for Remark 3.2 finds that only one-sided affine alignment (Eq. 9.2) is non-trivial, and further, defining hemi-metric spaces allows us to derive proper mathematical preorders and equivalence relations on the space of encoders, which (e.g., Thm. 4.1) has concrete empirical implications (cf. §7, “The Influence of Encoder Rank Deficiency”). Further, our formalization allows us to explicitly construct the linear bounds of extrinsic dissimilarity not just for a fixed classifier (Lemma 5.1.1), but for the “worst-case dissimilarity” (cf. Thm. 5.1) – this is highly practical, as it shows our intrinsic similarity measure to be indicative of how an encoder may perform across downstream tasks in the transfer learning setting. Still, we acknowledge the density of the theoretical sections, and as per the recommendation of this reviewer and reviewer 6x9V, we plan to create a distilled overview paragraph of the theoretical contributions in the next revision to appeal to the broader ML community.
---
- **“There is also very little discussion of practical aspects of running this analysis; the appendix mentions that intrinsic alignment is more expensive to compute than extrinsic alignment; given that the aim of the work is to set upper bounds, why not just compute the extrinsic scores?”**: The practicality comes exactly *from* the fact that we show that the intrinsic similarity measures can already be indicative of task performance – not just for a fixed task (Lemma 5.1.1), but for the worst-case task across all tasks (see Thm. 5.1, as well as §8, “Implications of §5”). In other words, we show that computing our intrinsic measures of similarity may already indicate the overall utility of an encoder for *any* downstream task, thus potentially eliminating the need to do task-specific evaluation of extrinsic scores.
---
- **“Lacking a discussion of what kinds of encoders can be compared”, “Can these be run across encoders with different architectures?”**: Generally, we do not make any assumptions about the structure of the encoder in the derivation of our theoretical results, resulting in wide generalizability of our method. In other words, our method can be applied to measure the (intrinsic) similarity between *any* two encoders and evaluate how this affects how (extrinsically) similar they can be in terms of their output probabilities.
---
- **“What is the benefit of defining an order over encoders”, “alignment metric into a measure of "badness" for a given encoder”**: Purely mathematically speaking, a proper order over a set is a powerful structure that allows us to define proper equivalence relations over encoders. Beyond equivalence, the order between encoders (cf. Lemma 4.1.) indicates that we may be able to exactly affinely map from one image into the image of a lower-rank encoder, whereas the inverse does not hold. A higher-rank encoder is therefore more powerful, and intrinsic affine homotopy is indicative of this. We find this rank deficiency to surface empirically (cf. §7, “The Influence of Encoder Rank Deficiency”). By Lemma 5.1.1., this also strongly affects extrinsic similarity, i.e., how closely we can affinely reach the output probabilities of an encoder for a specific task from another encoder – yielding a measure of the “goodness”/”badness” of an encoder, in your words.
---
- **“these alignment measures can somehow be used to improve pre-training”**: this is an interesting suggestion, especially in light of the nuances in fine-tuning that have been shown to have large effects on downstream task performance of the pretrained encoders [1]. We can add this as possible future work or motivation; thank you for the suggestion.
We again thank the reviewer for the insightful suggestions and hope that our clarifications address their concerns about the overall contributions of our work!
---
[1] Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response, it has helped me understand some of the concepts better. I still feel like the practical aspects are still unclear to me (e.g. the tradeoff of runtime vs guarantees, the implications of "good"/"bad" encoders), however, this may be because I am not very well versed on the theoretical side of algebraic spaces. I have gone through the other reviews, and will maintain my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Vision Foundation Model Enables Generalizable Object Pose Estimation | Accept (poster) | Summary: This paper introduces VFM-6D, a two-stage RGBD-based method for generalizable object pose estimation. Given a set of reference images depicting objects of an arbitrary category, the proposed method first estimates a viewpoint using image matching, then based on the NOCS map of this matched reference image, it estimates the NOCS map of the query image, which allows for a more accurate re-estimation of the 6D pose and 3D size of the input objects. The paper experiments on novel categories of category-level object pose benchmarks such as Wild6D and CO3D, and on unseen-instance object pose estimation on LINEMOD, showing state-of-the-art results on these benchmarks.
Strengths: - S1: The paper presents a generalizable method, VFM-6D, that can be applied to both instance-level unseen object pose estimation and category-level object pose estimation for novel categories.
- S2: The paper is well-structured and easy to follow.
- S3: The experiments demonstrate that VFM-6D achieves state-of-the-art results on several benchmarks.
Weaknesses: - W1: The main contribution of the paper, category-level object pose estimation for novel categories when reference and query images are very different, is not well supported. There is no analysis to back up this claim, and experiments either do not show how the reference and query objects differ (Wild6D, CO3D) or the reference and query objects are the same (LINEMOD).
- W2: The quantitative results in Tables 1, 2, and 3 are unclear. Some methods, such as PoseContrast, LoFTR, and LightGlue, are RGB-based, while VFM-6D uses both RGB and depth as input, which contains more information for estimating 6D pose. It would be helpful if the authors clearly stated the inputs used in each method for a fair comparison. Additionally, it is unclear whether these results come from re-training the baseline on the same training set or using available pre-trained models.
- W3: The paper does not clearly explain why a two-stage approach is necessary, as the first stage can already provide the 6D pose of the object via the pose of the nearest reference, which can serve as the pose prediction. If this is for pose refinement purposes, it would be well-motivated and much clearer if the authors showed the results of the first stage.
- W4: (Minor) The paper lacks implementation details, such as how 64 reference images are sampled (only cover out-of-plane rotation or both out-of-plane rotation and in-plane rotation), which tokens are being used in foundation features (class token or patch token), how the figure 1 is made and similarity is normalized.
Technical Quality: 2
Clarity: 3
Questions for Authors: All my questions are related to the weaknesses mentioned above:
- Q1: Can the proposed method work well on unseen categories when the geometry/texture of reference and query objects are different?
- Q2: Did the authors experiment with GPT-4V and Text-to-3D for generating 3D models and estimating 6D pose as shown in Figure 2? If yes, which models were used?
- Q3: How 64 reference views are sampled?
- Q4: Which tokens are used in foundation features?
- Q5: In Figure 1, did the authors use the same constant for normalizing the similarity between two methods, and is the similarity too low, between 0.006 and 0.0014 but not 0 and 1? Where is the GT view ID?
- Q6: Please clearly mention how the authors obtained the results for the baseline, and clearly mention the input used in each baseline.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the authors mentioned the limitations which is requiring the depth image as the input.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1: Can the proposed method work well on unseen categories when the geometry/texture of reference and query objects are different?**
**A1:** Thanks for the comment. We suppose that our presented experiments on Wild6D and CO3D datasets could address this concern. For Wild6D evaluation, we have tested our method on 162 different object instances in 5 unseen object categories. For CO3D evaluation, we have tested our method on 200 different object instances in 20 unseen object categories. In Fig.2 of the PDF file uploaded in the global response, we present some examples of reference and query objects used in Wild6D and CO3D evaluations.
- In terms of the texture difference, the reference object that we used is untextured, while the query objects in various testing scenarios usually have colorful and diverse textures. The texture difference between reference and query objects is significant.
- In terms of the geometry difference, since the precise CAD model for each object instance in Wild6D and CO3D is not available, we are not able to measure the geometry difference quantitatively. However, as depicted in Fig.2 of the uploaded PDF file, the geometry difference between reference and query objects is rather apparent. For example, the reference ‘mug’ can be used to estimate poses for a variety of mugs with different rim shapes and handle styles. Moreover, the reference ‘chair’ with four legs and no armrests can be used to estimate poses for various chairs with different leg structures and chairs with or without armrests.
We suppose that these results can demonstrate that our method can work well on unseen categories when the geometry/texture of reference and query objects are different.
>**Q2: Why is a two-stage approach necessary?**
>**A2:** Thanks for the comment. In our ablation study, we have validated our designs on the first and second stages. Table 4 of our original paper demonstrates the significant performance improvement brought by each individual module. If we only use the first stage, the final pose accuracy would be sensitive to the number/pose distribution of sampled reference images. To explore this problem, we evaluated our method with different numbers of reference images. These results are presented in Sec. F of the paper appendix. As shown in Fig. 14 of the appendix, the second stage makes the final pose accuracy robust to the selection of reference images. Moreover, Fig. 14 also comprehensively compares the pose accuracy after the first stage and the second stage. The second stage can consistently improve the performance. These results demonstrate the advantage and necessity of the two-stage design.
>**Q3: The specific required inputs and evaluation settings used for each baseline.**
**A3:** Thanks for your valuable suggestion. To clarify, we list the involved baseline methods one by one:
- SPD, SGPA, DualPoseNet, and GPV-Pose: RGBD-based methods. We re-trained their models and evaluated them on Wild6D.
- PoseContrast: Use RGB images during object viewpoint estimation, and further leverage depth map to estimate complete object pose. We re-trained the PoseContrast model with our constructed synthetic dataset and then evaluated it on Wild6D.
- ZSP: RGBD-based baseline. Does not involve any training module and we directly evaluated it based on its official source code.
- LoFTR and LightGlue: As mentioned in L299-300 of the original paper, we need to use both RGB images and depth maps for category-level object pose estimation. We exploited their officially provided pre-trained models during experiments.
- GeDi: Point-cloud-based method. As a generalizable local deep descriptor for point cloud registration, we exploited its robust model pre-trained on 3DMatch during experiments.
We are sorry for the possible confusion caused. We will elaborate on the description for all baseline approaches in our revised version.
>**Q4: Did the authors experiment with GPT-4V and Text-to-3D for generating 3D models and estimating 6D poses as shown in Figure 2? Which models were used?**
**A4:** Thanks for your interest in our GPT-4V + Text-to-3D setting. We mainly tested this setting in practical open-world scenarios from the D3-Field dataset and RH20T dataset. Figure 9 of the main paper and Figure 15 of the paper appendix present the experiment results on various unseen object categories, including ‘shoes’, ‘flower pot’, ‘watering can’, ‘forks’, and ‘ladle’. For experiments on Wild6D and CO3D, we leveraged the collected shape templates for evaluation.
>**Q5: How are the 64 reference views sampled?**
>**A5:** Thanks for the comments. The procedure of preparing shape templates can be divided into the following three steps: (1) Normalize the input object CAD model to have a diameter of 1; (2) Sample 64 camera viewpoints from a sphere centered on the normalized object CAD model; (3) Render the corresponding RGB-D template image at each viewpoint with Blender. At each camera viewpoint, we also randomly sample the camera in-plane rotation within a specified rotation range (i.e., [$-10^\circ$, $10^\circ$]).
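The rebuttal does not specify how the 64 viewpoints are distributed on the sphere; one common way to obtain roughly uniform coverage is a Fibonacci lattice, sketched below in NumPy (the lattice choice, function name, and radius are assumptions, not the authors' implementation):

```python
import numpy as np

def sample_sphere_viewpoints(n=64, radius=1.5):
    """Sample n roughly uniform camera positions on a sphere
    around the origin using a Fibonacci lattice."""
    i = np.arange(n)
    phi = (1 + 5 ** 0.5) / 2                  # golden ratio
    z = 1 - 2 * (i + 0.5) / n                 # heights uniform in (-1, 1)
    theta = 2 * np.pi * i / phi               # golden-angle azimuth increments
    r = np.sqrt(1 - z ** 2)                   # ring radius at height z
    pts = np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
    return radius * pts                       # (n, 3) camera centers

views = sample_sphere_viewpoints(64)
```

Each returned point would serve as a camera center looking at the object origin, with the in-plane rotation then jittered within the stated range.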
>**Q6: Which tokens are used in foundation features?**
**A6:** Thanks for the comment. For CLIP and MVP foundation models, we used their patch tokens. For DINO-v1 and DINO-v2 models, we developed our framework based on their ‘key’ tokens. We will incorporate these implementation details during revision.
>**Q7: Clarify Figure 1. How is the similarity score normalized?**
**A7:** Thanks for your careful checking. Given one query image and $N$ reference images, we would obtain $N$ raw similarity scores. We further normalize the similarity score via softmax. Note that a global scale factor of 0.5 is applied to the normalized score for the purpose of visualization, which does not affect the similarity distribution over reference views. Sorry for any confusion caused. We will further revise Figure 1 and its corresponding caption to improve its clarity.
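The normalization described above amounts to a standard softmax followed by a global visualization scale. A minimal sketch (variable and function names are ours); since the scale multiplies all scores equally, the argmax and the shape of the distribution over reference views are unchanged:

```python
import numpy as np

def normalize_similarity(raw_scores, vis_scale=0.5):
    """Softmax-normalize the N raw query-reference similarity
    scores, then apply a global scale factor used only for
    visualization (does not affect the distribution's shape)."""
    s = np.asarray(raw_scores, dtype=np.float64)
    e = np.exp(s - s.max())        # subtract max for numerical stability
    return vis_scale * e / e.sum()

p = normalize_similarity([2.0, 1.0, 0.5])
```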
---
Rebuttal Comment 1.1:
Title: Raising score
Comment: I would like to thank the authors for very detailed replies (both for me and other reviewers). The rebuttal addresses all of my concerns, and I am willing to raise my scores.
---
Rebuttal 2:
Title: Thank You!
Comment: Dear Reviewer gYMD,
Thank you for your feedback on our rebuttal. We are very glad that our response has fully addressed your earlier questions. Much appreciate your kind support for our work. We are grateful for your willingness to raise the score. Thanks a lot!
Sincerely,
Authors | Summary: The paper presents VFM-6D, a new framework for generalizable object pose estimation. VFM-6D integrates a 2D-to-3D feature lifting module and a shape-matching module, both of which utilize pre-trained vision foundation models to enhance object representation and matching accuracy. In open-set robotic manipulation scenarios, the authors use GPT-4v to generate mesh information from the prompts. The model is trained on synthetic data and demonstrates promising generalization capabilities in various real-world scenarios, as evidenced by evaluations on Wild6D and CO3D datasets.
Strengths: • This paper tackles a challenging but important problem in generalizable object pose estimation. Unlike previous approaches which require either a 3D mesh or reference images of the same object instance, the presented method can predict the 6D object pose given an RGBD image, utilizing category-level reference information.
• Since the appearance and shape of the reference instance could be different from those of the query instance, the pose estimation is quite difficult. The authors propose to handle this problem by performing template matching and registration, which is technically sound.
• The experimental results are promising. The method is trained on synthetic images, but tested on real unseen images. It outperforms some previous approaches on Wild6D and CO3D datasets.
• The authors leverage GPT-4v to generate reference information, which may facilitate the applications in real-world scenarios.
Weaknesses: • The method relies heavily on strong prior information, such as depth maps and NOCS maps. This requirement could limit its applicability in real-world scenarios.
• Some important details are missing. Specifically, how to perform the object shape focalization is not introduced; how to obtain the NOCS map for a reference image is unclear.
• The idea of utilizing foundation models is not very impressive. There have been some approaches that use foundation models for generalizable object pose estimation. I would recommend rewriting the introduction, highlighting the importance and the novelty of the category-level setting.
• For query-reference image matching, some comparisons are missing. [1][2][3] use a patch-level template matching strategy for image matching, which does not rely on depth information but achieves good generalization ability.
[1] Nguyen, Van Nguyen, et al. "Templates for 3d object pose estimation revisited: Generalization to new objects and robustness to occlusions." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Zhao, Chen, Yinlin Hu, and Mathieu Salzmann. "Fusing local similarities for retrieval-based 3d orientation estimation of unseen objects." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[3] Örnek, Evin Pınar, et al. "Foundpose: Unseen object pose estimation with foundation features." arXiv preprint arXiv:2311.18809 (2023).
• The experiment on CO3D is somewhat unfair. All evaluated methods are based on a feature-matching technique. The presented method is trained on ShapeNet and tested on CO3D. Both of them are object-centric. However, other methods such as LoFTR and LightGlue are trained on scene-centric images, and may not be able to generalize to object-centric scenarios without finetuning.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to my comments in "Weaknesses"
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been thoroughly discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1: The method relies on depth maps and NOCS maps, which could limit its applicability in real-world scenarios.**
**A1:** Thanks for your comment. To address your concern, we explore the possibility of RGB-only pose prediction for our method. Due to the character limit, please refer to **Q1** in the global response for detailed evaluation results.
The NOCS map of a reference image can be directly computed based on the GT pose of the reference image. It does not require any additional information. Please refer to our response to your Q3 for details of computing the NOCS map. In this regard, the use of the NOCS map would not be a limitation of our method in practical applications.
>**Q2: How to perform the object shape focalization?**
**A2:** Thanks for your comment. The focalization step takes as input the object point cloud in the camera frame and the initial object orientation estimated during the query-reference image matching stage. It aims to transform the object point cloud into a relatively canonical and normalized space to facilitate the subsequent object shape representation. Formally, the focalization step can be formulated as $\mathcal{X}'=\mathbf{R}^{-1}\frac{\mathcal{X}-\bar{\mathcal{X}}}{\max\|\mathcal{X}-\bar{\mathcal{X}}\|}$, where $\mathcal{X}$ denotes the original object point cloud in the camera frame, $\bar{\mathcal{X}}$ denotes the point cloud center, and $\mathbf{R}$ is the initial object orientation estimated during the query-reference image matching stage. We will elaborate on this part more clearly in the revised version.
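For concreteness, the focalization step can be transcribed directly into NumPy (a sketch; the function name and array conventions are ours, not the authors' code):

```python
import numpy as np

def focalize(points, R):
    """Shape focalization: center the camera-frame object point cloud
    at its centroid, scale it by the maximum point norm, and rotate it
    into a canonical frame with the initial orientation estimate R
    (a 3x3 rotation matrix)."""
    centered = points - points.mean(axis=0)                  # X - X_bar
    normalized = centered / np.linalg.norm(centered, axis=1).max()
    # apply R^{-1} to each (row) point: x' = R^{-1} x
    return normalized @ np.linalg.inv(R).T
```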
>**Q3: How to obtain the NOCS map for a reference image?**
**A3:** Thanks for pointing it out. Each pixel in the NOCS map indicates a 3D coordinate value in the normalized object frame. For each reference image, the object scale $s$ and the ground-truth object pose $[\mathbf{R}|\mathbf{t}]$ should be known. For each pixel in the camera frame, we are able to transform its 3D coordinate into the object frame based on $[\mathbf{R}|\mathbf{t}]$ and then normalize the coordinate value with $s$ to obtain the corresponding coordinate value in the NOCS map. We feel sorry for the possible confusion and will elaborate on these details in the revised version.
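As a minimal sketch of the per-pixel computation described above (function name and conventions are ours; the standard NOCS convention also adds a 0.5 offset so values lie in [0, 1], and whether the paper uses that offset is not stated, so it is omitted here):

```python
import numpy as np

def reference_nocs(points_cam, R, t, scale):
    """Map camera-frame 3D points of a reference view to NOCS
    coordinates, given the ground-truth object-to-camera pose [R|t]
    and the object scale s: p_o = R^T (p_c - t), then divide by s."""
    p_obj = (points_cam - t) @ R   # row-wise R^T (p_c - t)
    return p_obj / scale

nocs = reference_nocs(np.array([[3., 0., 0.]]),
                      np.eye(3), np.array([1., 0., 0.]), 2.0)
```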
>**Q4: Recommend rephrasing the introduction to highlight the importance and novelty of the category-level setting.**
**A4:** Thanks for the constructive suggestion. We are glad to see that the reviewer appreciated the importance of the problem that our paper targets and the novelty of our method from the category-level perceptive. We suppose that our method could be distinguished from other existing foundation-model-based works from the following aspects: (1) We show that by using our proposed adaptable framework, we can effectively enhance the 3D representation capability of the pre-trained 2D foundation model to handle intra-class shape variations of different objects for challenging category-level object pose estimation. (2) We demonstrate that the pre-trained foundation model can be effectively adapted to the task of category-level object pose estimation with cost-effective synthetic data and also keeps high robustness in handling novel object categories. We would follow your suggestion to rephrase our paper to highlight our contributions from the category-level perspective.
>**Q5: For query-reference image matching, comparison with patch-level template matching strategy [1][2][3] is missing.**
**A5:** Thanks for the comment. The first patch-level template matching method [1] you suggested is Ref [71] in our original paper (cf. L322 on page 8). We have compared this template-matching strategy with our proposed feature lifting module in our ablation study. Table 4 of the original paper reports quantitative evaluation results (i.e., ‘w/o feature lifting’ vs. Ours) and Fig.11 of the appendix presents qualitative evaluation results (i.e., column (a) vs. column (c)). We suppose that these results can demonstrate the advantage of our method over the patch-level template matching strategy. We will rephrase our presentations to present this part more clearly.
>**Q6: LoFTR and LightGlue are trained on scene-centric images, which may be not able to generalize to object-centric scenarios without finetuning.**
**A6:** We appreciate you bringing this up. We have followed your suggestion and fine-tuned LightGlue on object-centric images. Specifically, we leveraged the 120 object models we had collected from ShapeNet to synthesize object-centric image pairs for fine-tuning LightGlue. For each object instance, we sampled image pairs by rotating the camera within a range of 30 degrees. We generated 200 image pairs for each object instance, resulting in a total of 24K image pairs used to fine-tune LightGlue. The table below presents the evaluation results of the fine-tuned LightGlue on the CO3D dataset.
| | Pre-trained LightGlue | Finetuned LightGlue | Ours |
| :------: | :---------------------: | :-------------------: | :----: |
| Acc.$15^\circ$ | 5.9 | 9.8 | 50.2 |
| Acc.$30^\circ$ | 12.7 | 15.4 | 67.4 |
We found that for the task of category-level object pose estimation, the improvement from the object-centric fine-tuning was relatively limited. We suppose that the limitation of these feature-matching techniques is that they focus too heavily on low-level object features, and lack a sufficiently informative representation to perceive the object's shape and semantics, which makes them struggle with the challenging problem of category-level object pose estimation.
[1] Nguyen et al. Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions. CVPR 2022.
[2] Zhao et al. Fusing Local Similarities for Retrieval-based 3D Orientation Estimation of Unseen Objects. ECCV 2022.
[3] Örnek et al. FoundPose: Unseen Object Pose Estimation with Foundation Features. ECCV 2024.
---
Rebuttal Comment 1.1:
Comment: I sincerely thank the authors for the rebuttal. Some of my concerns have been addressed. However, the experiment about depth maps and the explanation of the reference NOCS map are questionable to me.
In that experiment, the authors use the depth anything model to predict metric depth from an RGB image. The method is still based on RGB-D input, not only RGB images. The metric depth prediction is much more challenging than relative depth prediction, and it is hard to generalize to diverse scenarios. The reliance on depth information is still a limitation of this method. Besides, it is confusing that the method with the predicted depth performs much better in some cases than that with the GT depth.
The answer regarding the reference NOCS map seems wrong. The 2D-3D correspondences cannot be generated without depth.
---
Reply to Comment 1.1.1:
Title: Thank You and Our Further Response
Comment: Dear Reviewer heqf:
Thank you for your kind feedback on our rebuttal.
- We appreciate your further clarification on your question, which makes us more clearly understand your major concern regarding the depth map. We would like to gently clarify that our problem setting is different from what you indicate. Based on our understanding, you meant inputs are RGB images. In our setting, we used RGB-D data as the input, which is what has been usually assumed in the literature of category-level object 6D pose estimation. Upon re-interpreting your comments, we became aware that your problem setting is in fact a more challenging but meaningful one. We have newly searched the literature and interestingly found several very recent works investigating it. These methods [1,2,3,4] exploit RGB images to implicitly learn an object shape representation for object pose estimation, which are distinct from our framework. However, their current performance is still limited due to the challenging setting. Despite it being out of the scope of this particular work, we sincerely thank you for your inspiring question, and we have re-considered the possibility of adapting our method to fit your more challenging problem setting. We conjecture that by changing our method’s depth-based explicit object shape modeling to RGB-based implicit shape modeling with latent feature volumes [1,2] or implicit neural radiance fields [3,4], it is promising to effectively address the problem of object shape representation without any reliance on depth. This idea also aligns with the recent literature which addresses the same problem of RGB category-level object pose estimation as you mentioned. We very much appreciate your inspiring comments and will further extend our method to fit the more interesting setting, thereby making the pose estimation method even more generalizable to diverse scenarios in the future. We are also happy to include relevant discussions on this interesting point in our final version.
In addition, we would like to add clarifications to your question regarding the pose performance with predicted depth maps. Thank you for raising this interesting observation. After double-checking the experiments with the depth-anything model, the pose accuracy with the predicted depth map was found to be clearly higher than the one with the original CO3D depth map in 2 out of the 20 total categories (*i.e.*, chair and hydrant). After a meticulous review of the corresponding depth maps, we hypothesize that this may be due to the fact that, in comparison to the original CO3D depth, the predicted depth might provide a more comprehensive representation for certain chairs with backrests and handles that possess an openwork, perforated, or lattice-like structure. Furthermore, the predicted depth could potentially offer a smoother depiction for metal hydrants. Consequently, the predicted depth may be more capable of capturing the unique shape of the object for the category of 'chair' and 'hydrant', thereby resulting in improved pose accuracy. We are grateful for your insightful query and hope that this explanation offers a satisfactory response.
- We apologize if we initially misunderstood your concern pertaining to the NOCS map. Yes, in our RGBD-based setting, we require depth information to establish correspondences between the object frame and the camera frame for pose optimization. Due to the inherent size ambiguity of objects in category-level object pose estimation, we are unable to directly utilize the 2D-3D correspondences and solve the pose parameters via the Perspective-n-Point algorithm. Instead, it is necessary to turn to 3D-3D correspondences to concurrently recover scale and pose parameters. In an effort to diminish its dependence on depth information, recent studies have introduced intriguing solutions that rely on multi-view NOCS maps [5] or the decoupled metric scale recovery strategy [6] to mitigate the scale ambiguity. We are very much interested in exploring these promising solutions to further improve the applicability of our method in future works. We sincerely appreciate your understanding and patience in this matter, and we are grateful for your insightful observations.
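For concreteness, the joint recovery of scale, rotation, and translation from 3D-3D correspondences mentioned above is classically solved with the Umeyama algorithm; the following is a minimal NumPy sketch for illustration only (not our actual implementation), with generic point sets standing in for NOCS-to-camera-frame correspondences:

```python
import numpy as np

def umeyama(src, dst):
    """Similarity transform (s, R, t) minimizing ||s*R @ src_i + t - dst_i||^2.
    src, dst: (N, 3) corresponding 3D points (e.g. NOCS coords vs. camera-frame points)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                  # 3x3 cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                            # reflection correction
    R = U @ S @ Vt
    var_s = (sc ** 2).sum() / len(src)          # variance of centered source points
    s = np.trace(np.diag(D) @ S) / var_s        # isotropic scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Unlike PnP on 2D-3D correspondences, this 3D-3D formulation recovers the metric scale factor jointly with the pose, which is why depth is needed in our setting.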
Sincerely,
Authors
[1] Zhao et al., 3D-Aware Hypothesis & Verification for Generalizable Relative Object Pose Estimation, ICLR 2024.
[2] Felice et al., Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation, arXiv 2024.
[3] Li et al., NeRF-Pose: A First-Reconstruct-Then-Regress Approach for Weakly-supervised 6D Object Pose Estimation, ICCVW 2023.
[4] Saxena et al., Generalizable Pose Estimation Using Implicit Scene Representations, ICRA 2023.
[5] Chen et al., StereoPose: Category-Level 6D Transparent Object Pose Estimation from Stereo Images via Back-View NOCS, ICRA 2023.
[6] Wei et al., RGB-based Category-level Object Pose Estimation via Decoupled Metric Scale Recovery, ICRA 2024. | Summary: The authors introduce a method for generalizable object pose estimation given an RGB-D image. The approach builds upon the DINOv2 model and adds two adapter blocks on top to accomplish better viewpoint estimation as well as object coordinate estimation. The resulting model is trained on synthetic data with a contrastive loss combined with a coordinate map loss. The model achieves good performance while being generalizable to unseen instances without any fine-tuning.
Strengths: The paper presents a simple and working idea to extend the DINOv2 model to category-level pose estimation by adding two adapter blocks on top. The method is then thoroughly evaluated and ablated, giving hints on what is and is not important in the pipeline. The method also works well in the standard BOP setup on LINEMOD, which highlights the generalization capabilities of the method.
Weaknesses: Figure 1 is a bit misleading: while it is true that DINOv1 did not work out-of-the-box for pose estimation, recent work such as [1] shows that it can with some modifications. Why is DINOv1 used here and not the much more performant DINOv2?
In the related work section the authors omit some recent work on 6D pose estimation that is similar in nature to this method, such as FoundPose [1]. That paper was released earlier than FoundationPose [2]; however, it is not discussed in this paper at all.
In comparison to BOP methods, the selection of baselines by the authors is not fully thorough. For example, FoundationPose is mentioned; however, the method is not compared against it on LINEMOD.
Lastly, for inference, no information about template preparation is available apart from L268-L269 on p. 6. What kind of conditions are needed? What settings are used?
[1] Evin Pınar Örnek et al., FoundPose: Unseen Object Pose Estimation with Foundation Features, ArXiv:2311.18809
[2] Bowen Wen et al., FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects, ArXiv:2312.08344
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) For pose estimation tasks, the method for sampling rotations from SO(3) can very strongly affect the resulting performance. I was wondering what kind of SO(3) sampling is used in the generated synthetic data?
2) Going to my last point in the weaknesses, can you please describe what exactly is the procedure to prepare the shape templates for the inference? What kind of setup is needed (textured vs. untextured, light sources, etc)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately discuss the main limitations of this method, mainly related to occlusions and the requirement of depth maps for input images.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1: Clarify Fig.1. Why is DINOv1 used in Fig.1 and not much more performant DINOv2?**
**A1:** Thanks for pointing it out. In Fig. 1, taking DINOv1 as an example, we aim to show that pre-trained vision foundation models cannot be reliably used as-is for category-level object pose estimation. This observation is indeed consistent across different representative vision foundation models, including CLIP, MVP, DINOv1, and DINOv2 (as shown in Fig. 8 of the original paper). We attribute this to their limited 3D representation capability due to pre-training with text and 2D images. Based on this, we propose a foundation feature lifting module for more reliable object pose estimation, which has proven effective with different vision foundation models. To reduce possible confusion, we will revise Fig. 1 of the main paper to include results from different vision foundation models during revision.
>**Q2: The recent work FoundPose[1] is not discussed in the paper.**
**A2:** Thanks for your constructive suggestion. FoundPose [1] is a recent method for object pose estimation. Similar to FoundationPose [2], it aims to tackle the problem of instance-level unseen object pose estimation. The idea of integrating DINOv2 representations and bag-of-words descriptors is very impressive. At the time of our submission, FoundPose was still an arXiv paper, while FoundationPose had been accepted to CVPR 2024; therefore, we mainly focused on discussing published works while preparing the manuscript. We are also glad to find that FoundPose has been accepted by ECCV 2024 (the paper decision date is 1st July 2024). Note that our method is clearly distinguished from FoundPose by addressing the more challenging category-level estimation problem for unseen object categories. We will ensure to incorporate a discussion of FoundPose in the revised manuscript.
>**Q3: Comparison with the recent BOP method FoundationPose[2].**
**A3:** Thanks for your valuable suggestion. Considering that the main focus of our paper is category-level object pose estimation for unseen object categories, we compare our method with FoundationPose on the CO3D dataset. We utilized the official source code and pre-trained models from FoundationPose, adopting the model-based setup during inference. Both our method and FoundationPose use the same category-level shape templates for object pose estimation. The following table presents the evaluation results in terms of Acc.$15^\circ$ / Acc.$30^\circ$. Due to the character limit, we only report the per-category accuracy for 6 representative categories and the average accuracy across all 20 categories:
| | Motorcycle | Backpack | Bicycle | Teddybear | Car | Chair | Average |
| :--------------: | :-----------: | :-----------: | :------------: | :------------: | :------------: | :-------------: | :-------------: |
| FoundationPose [2] | 42.4/55.1 | 13.3/23.1 | 25.3/46.9 | 20.4/43.8 | 39.4/54.2 | 59.3/67.7 | 30.1/48.8 |
| Ours | 56.4/76.3 | 30.6/47.4 | 46.5/59.2 | 25.2/54.4 | 55.6/74.4 |72.1/86.8 | **50.2/67.4** |
As can be observed, our method demonstrates its superiority over FoundationPose in challenging category-level object pose estimation for unseen object categories.
>**Q4: The procedure of preparing the shape templates. What kind of conditions are needed and what settings are used to prepare the shape templates for the inference?**
**A4:** Thanks for the comments. The procedure for preparing shape templates can be divided into the following three steps: (1) Normalize the input object CAD model to have a diameter of 1; (2) Sample $N$ camera viewpoints ($N$ indicates the number of template images for inference) from a sphere centered on the normalized object CAD model; (3) Render the corresponding RGB-D template image at each viewpoint with Blender. Our method only needs an **untextured object CAD model**. When rendering the shape templates, we **fix the position of the lighting source** above the object, with a **random lighting color** that is uniformly sampled within $[0.5, 0.5, 0.5]$~$[1.0, 1.0, 1.0]$. We will append these implementation details in our revised manuscript.
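For illustration, steps (1)-(2) can be sketched as follows (a hypothetical helper, not our actual rendering code): a Fibonacci-sphere sampling of $N$ camera centers around the normalized model, each paired with a look-at rotation toward the object origin:

```python
import numpy as np

def sample_template_viewpoints(n, radius=2.5):
    """Sample n camera poses on a sphere of given radius, looking at the origin
    (where the normalized CAD model sits)."""
    golden = np.pi * (3.0 - np.sqrt(5.0))       # Fibonacci-sphere angular increment
    poses = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n           # uniform in z -> near-uniform on sphere
        r = np.sqrt(max(0.0, 1.0 - z * z))
        theta = golden * i
        center = radius * np.array([r * np.cos(theta), r * np.sin(theta), z])
        forward = -center / np.linalg.norm(center)  # viewing direction toward origin
        up = np.array([0.0, 0.0, 1.0])
        if abs(forward @ up) > 0.99:            # avoid a degenerate up vector near poles
            up = np.array([0.0, 1.0, 0.0])
        right = np.cross(up, forward); right /= np.linalg.norm(right)
        true_up = np.cross(forward, right)
        R = np.stack([right, true_up, forward], axis=1)  # world-from-camera rotation
        poses.append((R, center))
    return poses
```

Each returned `(R, center)` pair would then be handed to the renderer as the template camera pose.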
>**Q5: What kind of SO(3) sampling is used in generated synthetic data?**
**A5:** Thanks for the comments. We customized the synthetic data generation pipeline [3] that has been widely used in the BOP challenge to generate our category-level synthetic data. To generate diverse synthetic data, two aspects of pose sampling are involved: (1) **Object on-surface sampling.** The object is placed upright onto a plane of the synthetic scene, and its in-plane position and orientation are randomly sampled. (2) **Camera pose sampling.** The camera location is first sampled around the object using the "uniform_elevation" sampling used in BOP. Then, the camera rotation is determined by a point of interest randomly sampled from the scene, plus a sampled camera in-plane rotation within a specified range ($[-30^\circ, 30^\circ]$ in our synthetic data). We will append these details related to synthetic data sampling in our revised manuscript.
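The camera pose sampling in aspect (2) can be sketched as follows — a simplified illustration of BlenderProc-style sampling (not our exact pipeline): sample an elevation angle uniformly, point the camera at a random point of interest, and compose with an in-plane roll within $[-30^\circ, 30^\circ]$:

```python
import numpy as np

def sample_camera_rotation(rng, inplane_deg=30.0):
    """Uniform-elevation viewpoint looking at a random point of interest,
    composed with an in-plane roll in [-inplane_deg, inplane_deg]."""
    elev = rng.uniform(-np.pi / 2, np.pi / 2)   # "uniform_elevation": angle uniform
    azim = rng.uniform(0.0, 2.0 * np.pi)
    cam = np.array([np.cos(elev) * np.cos(azim),
                    np.cos(elev) * np.sin(azim),
                    np.sin(elev)])              # camera center on the unit sphere
    poi = rng.normal(scale=0.1, size=3)         # random point of interest near origin
    forward = poi - cam
    forward /= np.linalg.norm(forward)
    up = np.array([0.0, 0.0, 1.0])
    if abs(forward @ up) > 0.99:                # avoid a degenerate up vector
        up = np.array([0.0, 1.0, 0.0])
    right = np.cross(forward, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    R_look = np.stack([right, true_up, -forward], axis=1)  # look-at rotation
    roll = np.deg2rad(rng.uniform(-inplane_deg, inplane_deg))
    c, s = np.cos(roll), np.sin(roll)
    R_roll = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R_look @ R_roll                      # compose roll about the viewing axis
```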
[1] Örnek et al. FoundPose: Unseen Object Pose Estimation with Foundation Features. ECCV 2024.
[2] Wen et al. FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects. CVPR 2024.
[3] Denninger et al. BlenderProc: Reducing the Reality Gap with Photorealistic Rendering. RSS Workshop 2020.
---
Rebuttal Comment 1.1:
Comment: I thank authors for addressing questions and concerns of all reviewers. I believe with the questions addressed, this is a good paper and I'm updating my score to "Accept".
---
Reply to Comment 1.1.1:
Title: Thank You!
Comment: Dear Reviewer yRF6,
Thank you for your feedback on our rebuttal. We are pleased that our response has addressed your questions. Much appreciate your kind support for our work. Thanks a lot!
Sincerely,
Authors | Summary: This paper addresses the task of category-level object pose estimation for unseen object categories from paired RGB-D imagery. To deal with unseen object categories, the authors leverage a vision-foundation model (Dino-v2 in this case).
The entire pipeline works in two stages. Given an RGB-D input, the first stage retrieves the reference images with the closest cosine similarity with respect to the object features. This is done in order to retrieve the viewpoint R, t of the reference images. To enhance the 3D representation of this step, the authors introduce a 2D-to-3D feature lifting step. In a nutshell, this step utilizes 3D positional information from the object's point cloud, merging it with the foundation features to improve the retrieval step.
The second step performs the actual object pose estimation. The process involves transforming the query and reference object shapes into a normalized space and encoding them using a point cloud Transformer integrated with pre-trained image features. The final NOCS coordinates for the query are calculated using a softmax function on the product of the shape embeddings of both objects, combined with the reference's NOCS coordinates. Training the model was enhanced with Blender-generated synthetic data.
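The NOCS-transfer step described above amounts to a soft nearest-neighbor lookup; the following sketch uses random arrays standing in for the learned shape embeddings (function and parameter names are hypothetical, not from the paper):

```python
import numpy as np

def transfer_nocs(query_emb, ref_emb, ref_nocs, tau=0.1):
    """Soft correspondence: query NOCS = softmax(query . ref / tau) @ reference NOCS.
    query_emb: (Nq, D) query shape embeddings, ref_emb: (Nr, D), ref_nocs: (Nr, 3)."""
    logits = query_emb @ ref_emb.T / tau        # (Nq, Nr) similarity scores
    logits -= logits.max(axis=1, keepdims=True) # subtract row max for numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)           # row-wise softmax weights
    return w @ ref_nocs                         # (Nq, 3) weighted NOCS coordinates
```

A low temperature `tau` pushes the soft lookup toward a hard nearest-neighbor assignment, while a larger `tau` averages over several reference points.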
Experimental results demonstrate large improvements over SotA on category-level unseen object pose estimation (Tbl. 1, 2). Ablative studies motivate the use of the proposed modules.
Strengths: - The approach makes clever use of a foundation model and improves its shortcomings via 3D-aware feature transformation
- The approach greatly outperforms state-of-the-art on category-level unseen object pose estimation. It can also be used for instance-level unseen object pose estimation and remains competitive.
- I greatly appreciated the evaluation in open-world scenarios (Sec. 4.5) which demonstrates its use in real-world scenario and not just on isolated benchmarks.
- Ablative studies demonstrating the performance using various foundation models is informative in showing how the performance of a foundation model correlates with the downstream tasks.
Weaknesses: - The method relies on RGB images paired with depth images. This requires specialized sensors and reduces the applicability in the real world. I would've appreciated an ablative study in which RGB images alone were input into a depth-estimation network before being passed into VFM-6D. This could greatly enhance the strengths of the paper and potentially show that RGB-only predictions would be possible as well.
- The authors mentioned that the model is limited in the presence of severe occlusion. A plot showcasing this (e.g., percentage of occlusion vs. accuracy) would be beneficial to demonstrate the robustness/weakness of the model with respect to this aspect.
Technical Quality: 3
Clarity: 3
Questions for Authors: - VFM-6D largely dominates in terms of performance compared to related work. Yet in Tbl. 1 it performs worse in two ranges. Is this solely due to the seen/unseen difference? Since VFM-6D outperforms on the larger thresholds, what would the authors propose would push VFM-6D to be SotA on all thresholds?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors were very clear on the limitations of the model
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Q1: Ablation study on RGB-only object pose estimation.**
**A1:** Thanks for your suggestion. Due to the character limit, please refer to **Q1** in the global response for detailed evaluation results. As can be observed, the RGB-only variant achieves comparable pose accuracy on average when compared with the original RGBD-based performance. These results indicate the application potential of our method in scenarios that lack depth observation. Please also refer to Fig.1 of the PDF file uploaded in the global response for detailed qualitative visualization results.
>**Q2: Results under different percentages of occlusion.**
**A2:** Thanks for the good suggestion. First, we evaluated our method under different levels of occlusion on the LineMOD-Occlusion dataset. The table below reports the average recall of ADD(-s).
| | No occlusion | <30% | 30%~60% | >60% |
| :-------: | :------------: | :--------------: | :-----------------: | :--------------: |
| ADD(-s) | 90.3 | 87.4 | 69.4 | 32.4 |
Second, we conducted more evaluations on the CO3D dataset. Since scenes from CO3D are generally occlusion-free, we randomly masked out different percentages of image regions to mimic the effect of object occlusion. The table below reports the Acc.$15^{\circ}$ / Acc.$30^{\circ}$ results on 5 representative categories of CO3D.
| | No occlusion | <30% | 30%~60% | >60% |
| :----------: | :------------: | :---------: | :---------: | :---------: |
| Motorcycle | 56.4/76.3 | 54.9/70.1 | 43.3/64.1 | 32.3/51.0 |
| Chair | 72.1/86.8 | 68.2/82.0 | 55.3/74.1 | 38.4/57.7 |
| Bicycle | 46.5/59.2 | 45.9/60.5 | 42.9/61.6 | 39.0/60.2 |
| ToyPlane | 55.1/66.6 | 53.0/64.3 | 44.8/56.4 | 45.0/55.5 |
| ToyTrain | 61.9/80.2 | 61.6/77.1 | 46.3/76.6 | 40.9/71.1 |
These results show that our method is relatively robust to small and moderate occlusion percentages, and suffers a performance drop when facing severe occlusions above a 60% occlusion rate. Moreover, as discussed in the main paper, in practical application scenarios this occlusion issue could potentially be addressed by active perception and mobile manipulation to find an occlusion-free viewpoint.
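The synthetic-occlusion protocol above (masking out a random image region covering a target fraction of pixels) can be sketched as follows (an illustrative helper, not our exact masking code):

```python
import numpy as np

def mask_random_region(img, occlusion_frac, rng):
    """Zero out a random axis-aligned rectangle covering ~occlusion_frac of the image."""
    h, w = img.shape[:2]
    area = occlusion_frac * h * w
    # sample an aspect ratio, then derive the rectangle height/width from the target area
    mh = int(np.clip(np.sqrt(area * rng.uniform(0.5, 2.0)), 1, h))
    mw = int(np.clip(area / mh, 1, w))
    y0 = rng.integers(0, h - mh + 1)
    x0 = rng.integers(0, w - mw + 1)
    out = img.copy()
    out[y0:y0 + mh, x0:x0 + mw] = 0
    return out
```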
>**Q3: Clarify results presented in Table 1.**
**A3:** Sorry for any confusion caused. The evaluation of Wild6D contains comparisons with two groups of approaches. To improve the clarity, we have reorganized the original Table 1 into these two distinct parts to better communicate the Wild6D evaluation findings:
- **Comparison with conventional category-level approaches.** For the competing methods SPD, SGPA, DualPoseNet and GPV-Pose, we first trained their models on the training data containing the testing categories, and then tested their performance on Wild6D. For our method, we did not train it on data containing the testing categories, and directly tested its performance on Wild6D. The following table presents the comparative results. Conventional category-level approaches highly rely on category-specific training. They have to train and test their models on the same sets of categories. In this case, Wild6D is **not unseen** to them. In contrast, without training on the five testing categories, our method significantly outperforms these category-level approaches in $10^\circ2cm$ and $10^\circ5cm$ accuracy, and achieves comparable accuracy under more strict thresholds.
| | $5^\circ2cm$ | $5^\circ5cm$ | $10^\circ2cm$ | $10^\circ5cm$ |
| :---------: | :----------: | :----------: | :-----------: | :-----------: |
| SPD | 2.6 | 3.5 | 9.7 | 13.9 |
| SGPA | **20.1** | **27.8** | 29.0 | 39.4 |
| DualPoseNet | 17.8 | 22.8 | 26.3 | 36.5 |
| GPV-Pose | 14.1 | 21.5 | 23.8 | 41.1 |
| Ours | 19.3 | 21.6 | **34.9** | **44.2** |
- **Comparison with category-agnostic approaches.** The following table presents the comparative results. Note that all competing methods are not trained on testing categories. In this case, categories in Wild6D are **unseen** to all three methods. Our method outperforms the other two competing methods in all metrics.
| | $5^\circ2cm$ | $5^\circ5cm$ | $10^\circ2cm$ | $10^\circ5cm$ |
| :----------: | :----------: | :----------: | :-----------: | :-----------: |
| PoseContrast | 2.3 | 4.7 | 5.5 | 10.1 |
| ZSP | 9.6 | 12.1 | 16.6 | 23.0 |
| Ours | **19.3** | **21.6** | **34.9** | **44.2** |
---
Rebuttal Comment 1.1:
Title: response to rebuttal
Comment: I thank the authors for providing their rebuttal and clarifying all the questions I had. I suggest the authors include these additional results into the main paper as these are informative.
I think this is a solid paper and vote for acceptance as before.
---
Reply to Comment 1.1.1:
Title: Thank You!
Comment: Dear Reviewer P7Hz,
Thanks for your feedback on our rebuttal. We are glad that our response has addressed your questions. We will follow your suggestion to include those informative experimental results in our final version. We truly appreciate your kind support for our work. Thanks a lot!
Sincerely,
Authors | Rebuttal 1:
Rebuttal: We are grateful to all reviewers for taking the time to review and provide constructive feedback. We are glad to see that most reviewers recognized the novelty of our method, the strength of our experimental evaluations, and the good presentation of our paper:
- “Presents a technically sound method to tackle a challenging but important problem.” (heqf)
- “Makes clever use of a foundation model and greatly outperforms state-of-the-art methods.” (P7Hz)
- “The method is thoroughly evaluated and ablated, highlighting its good generalization capability.” (yRF6)
- “Evaluation in open-world scenarios is greatly appreciated, demonstrating its application in real-world scenarios and not just on isolated benchmarks.” (P7Hz and heqf)
**(Q1: RGB-based ablation study.)** We also noticed that some reviewers provided good suggestions to evaluate our method under an RGB-only setting (P7Hz and heqf). We followed this suggestion to conduct an ablation study. Specifically, we input the RGB image into the pre-trained depth-anything [1] model to recover the corresponding metric depth. Then, we input the RGB image and the recovered depth into our VFM-6D for object pose estimation. Without additional training, the table below reports the Acc.$15^{\circ}$ / Acc.$30^{\circ}$ results on the CO3D dataset.
| | Motorcycle | Backpack | Bicycle | Teddybear | Book | Car | Chair |
| ----- | ----------- | ----------- | ------------ | ------------ | ------------ | ------------- | ----------- |
| RGB-D | 56.4/76.3 | 30.6/47.4 | 46.5/59.2 | 25.2/54.4 | 41.9/43.5 | 55.6/74.4 | 72.1/86.8 |
| RGB-only | 49.7/68.4 | 26.4/48.5 | 44.6/52.5 | 29.0/56.9 | 38.5/39.7 | 43.1/63.3 | 81.5/94.4 |
| | **Handbag** | **Hydrant** | **Keyboard** | **Mouse** | **Toaster** | **Hairdryer** | **Laptop** |
| RGB-D | 75.5/89.5 | 35.4/91.6 | 57.1/57.1 | 38.3/57.3 | 44.2/47.7 | 63.0/85.2 | 96.8/97.5 |
| RGB-only | 40.2/77.3 | 61.2/98.9 | 29.4/48.8 | 23.9/52.5 | 39.1/43.9 | 52.8/70.6 | 85.2/98.2 |
| | **Remote** | **Toilet** | **ToyBus** | **ToyPlane** | **ToyTrain** | **ToyTruck** | **Average** |
| RGB-D | 33.3/36.8 | 47.1/75.1 | 24.8/53.4 | 55.1/66.6 | 61.9/80.2 | 43.4/67.8 | 50.2/67.4 |
| RGB-only | 22.5/39.2 | 53.5/80.1 | 21.3/46.9 | 46.0/63.2 | 51.9/79.5 | 37.7/62.0 | 43.9/64.2 |
As can be observed, our method can effectively leverage the depth map estimated from the RGB image for generalizable object pose estimation. Note that in the context of category-level object pose estimation, we usually jointly estimate object size and pose parameters. Based on the estimated metric depth map, our method can recover object rotation precisely and can recover object size and translation up to a global scale factor. The RGB-only variant achieves comparable rotation accuracy on average when compared with the original RGBD-based performance. These promising results indicate the application potential of our method in scenarios that lack depth observation. Please also refer to Fig.1 of the uploaded PDF file for detailed qualitative pose prediction results.
[1] Yang et al. Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. CVPR 2024.
Below we further reply to each reviewer's comments point-by-point. We hope that our rebuttal can successfully address the reviewers' questions, and we look forward to receiving the reviewers’ support for our work.
Pdf: /pdf/74673814931cef38b397d0679677b0a2e6e6471a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalized Linear Bandits with Limited Adaptivity | Accept (spotlight) | Summary: This paper addresses the generalized linear contextual bandit problem under limited adaptivity constraints. In a setting (M1) where the times for updating the agent's policy are predetermined, the first proposed algorithm, B-GLinCB, divides the entire timeline into batches, updating the policy at the end of each batch. B-GLinCB explores using a G-optimal design policy in the first batch and adjusts the arm set in subsequent batches based on MLE parameters estimated from samples obtained in the first batch. The algorithm guarantees $\tilde{O}(\sqrt{T})$ regret when the number of policy updates is $\Omega(\log \log T)$, with a leading term independent of the instance-dependent parameter $\kappa$. In a setting (M2) where the agent can adaptively decide when to update the policy, the RS-GLinCB algorithm is proposed. RS-GLinCB uses two criteria to alter action selection: the first criterion allows a tighter estimation of the true parameter's derivative, and the second criterion achieves a $\kappa$-independent regret bound. The theoretical results of the proposed algorithms are supported through comparisons with baseline algorithms in logistic bandit settings.
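For intuition, the $\Omega(\log \log T)$ update budget matches the standard geometric batch grid for batched bandits, with endpoints of the form $t_i \approx T^{1-2^{-i}}$; the following is a sketch of such a schedule (an assumed illustrative form, not necessarily the paper's exact grid):

```python
import math

def batch_schedule(T, M):
    """Geometric batch endpoints t_i = ceil(T^(1 - 2^-i)), i = 1..M, capped at T.
    With M = O(log log T) batches the last endpoint reaches the horizon, which is
    why doubly-logarithmically many policy updates suffice for sqrt(T)-type regret."""
    ends = [min(T, math.ceil(T ** (1.0 - 2.0 ** (-i)))) for i in range(1, M + 1)]
    ends[-1] = T                                # final batch always reaches the horizon
    return ends
```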
Strengths: - The limited adaptivity constraint discussed in the proposed paper is crucial for many real-world decision-making problems. The authors extend results from linear reward models to non-linear reward models, with the proposed algorithms guaranteeing $\tilde{O}(\sqrt{T})$ regret under specific conditions.
- Notably, the leading term of the regret bound for the proposed algorithms is independent of the instance-dependent parameter $\kappa$. This is, to my knowledge, the first result showing $\kappa$-independent regret bounds for GLM reward models beyond logistic bandits.
- The proposed algorithms are computationally efficient since the number of samples used to estimate the reward parameter does not increase over time.
Weaknesses: - Although the proposed algorithm achieves a $\kappa$-independent regret bound, it requires prior knowledge of $\kappa$. The MNL contextual bandit (Perivier & Goyal, 2022) achieved a $\kappa$-independent regret bound without needing information about $\kappa$, which might be useful here.
- The proposed algorithm is computationally efficient concerning time $t$, but there is no explanation of its dependency on the dimension $d$. It lacks details on the computational complexity needed for calculating the optimal design and distributional optimal design at each time step.
* * *
Perivier & Goyal. "Dynamic pricing and assortment under a contextual MNL demand." Advances in Neural Information Processing Systems 35 (2022): 3461-3474.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In logistic bandits[2, 6, 7], algorithms achieve $\kappa$-independent regret bounds by leveraging the self-concordance property of log-loss. What specific challenges were encountered when extending this to GLMs?
2. In the experiments, it appears that ECOLog in [7] is a sub-algorithm for reward parameter estimation. If the comparison is with OFU-ECOLog, which also has a computational complexity of $O(\log t)$, what might explain the significant difference in execution times between OFU-ECOLog and RS-GLinCB?
3. Additionally, how does the execution time difference vary with increasing context dimensions?
4. Why were different values for the upper bound of the reward parameter S used in experiments (S=3 for logistic and S=5 for probit)?
- [Minor typos]
- The $\kappa$ in line 137 and the $\kappa$ in line 144 seem to refer to different concepts; should different symbols be used?
- Line 156: G-optimal design policy $\pi_G = \arg \min_\lambda \max_x \| x \|^2_{U(\lambda)^{-1}}$
- Algorithm 2, line 14: $\hat{\theta}_w$ → $\hat{\theta}_o$
- Constraint in Eq. (6): $|| \theta - \hat{\theta}_w ||_V \le \gamma \sqrt{\kappa}$ → $|| \theta - \hat{\theta}_o ||_V \le \gamma \sqrt{\kappa}$
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have well-addressed the limitations and further research directions in Section 6.
The content discussed in this paper appears to have little to no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. We address the comments and questions below:
**Regarding Weakness 1**: *Prior knowledge of $\kappa$*.
We note that [7], in fact, assumes the knowledge of an upper bound on $\kappa$ in Procedure 1 for the non-contextual problem and while calculating $\textbf{V}^\mathcal{H}_s$ matrix for the contextual setting. We work under the constraint of limited adaptivity, and our algorithms use the value of $\kappa$ during the warmup round of B-GLinCB and for the first switching criteria in RS-GLinCB. We believe a $\kappa$-dependent warmup round is especially necessary for the batch algorithm.
As mentioned in response (1a) for Review wYhM, access to an upper bound on $\kappa$ suffices. In particular, as is standard in the bandit literature (specifically, linear and generalized linear bandits), one can assume access to an upper bound on $\theta^*$. Such an upper bound directly translates into the required upper bound on $\kappa$.
We thank the reviewer for pointing us to the relevant work of Periviar & Goyal, which we will include in the final version of the paper.
**Regarding Weakness 2**: *Dependence on context dimension $d$ in computational complexity*.
A detailed discussion on computational complexity is provided in Appendix D (lines 675-692).
Our results hold for GLMs, in general. Here, for any class of reward functions (e.g., logistic rewards) for which the specified convex optimization problem can be solved efficiently, we obtain a polynomial dependence on $d$.
**Regarding Question 1**.
Our novel instantiation with respect to self concordance is detailed in Remark 1.3 (Lines 93-98).
Our overarching contribution is the development of GLM bandit algorithms with a key focus on limited adaptivity. This required new algorithmic techniques (e.g., the $\kappa$-dependent exploration phase). To complement the algorithms’ design, the analysis required new ways to adapt the self-concordance property of the reward distribution (e.g., Lemmas A.5, A.7). Further, new ideas were required on both the technical (e.g., Lemmas A.16, A.17) and analytic fronts.
**Regarding Question 2**: *"In the experiments, it appears that ECOLog in [7] is a sub-algorithm for reward parameter estimation"*.
We have discussed the superior empirical performance of RS-GLinCB in detail in Appendix D (lines 693-715).
There seems to be a factual oversight here: we do not use ECOLog as a sub-algorithm. In fact, ECOLog and our algorithms solve notably different optimization problems for estimating $\theta^*$.
In particular, ECOLog [7] estimates $\theta^*$ by optimizing a second-order approximation of the true log-loss, while we optimize the true log-loss, albeit less frequently. That is, RS-GLinCB solves a ‘larger’ convex optimization problem but less frequently, which results in a smaller overhead at the implementation level. By contrast, ECOLog solves a smaller optimization problem but does so every round. Moreover, ECOLog solves additional optimization problems every round for the adaptive warmup criterion (for estimating the parameters ${\theta}_t^0$, ${\theta}_t^1$ and $\bar{\theta}_t$ in Algorithm 2). We, on the other hand, have a simpler warmup criterion (see Switching Criteria 1) that relies only on the arm geometry and does not require solving an optimization problem.
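For logistic rewards, the 'larger but less frequent' convex problem referred to above is the regularized log-loss MLE over all collected samples; the following is a Newton-method sketch for illustration only (not our exact solver):

```python
import numpy as np

def logistic_mle(X, y, lam=1.0, iters=25):
    """Newton's method for the regularized logistic log-loss:
    argmin_theta  sum_t [log(1 + exp(x_t . theta)) - y_t * (x_t . theta)]
                  + (lam/2) * ||theta||^2.
    X: (n, d) contexts, y: (n,) binary rewards in {0, 1}."""
    d = X.shape[1]
    theta = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ theta))    # predicted success probabilities
        g = X.T @ (p - y) + lam * theta         # gradient of the regularized loss
        W = p * (1.0 - p)                       # per-sample Hessian weights sigma'(x.theta)
        H = (X * W[:, None]).T @ X + lam * np.eye(d)
        theta -= np.linalg.solve(H, g)          # Newton step
        if np.linalg.norm(g) < 1e-8:
            break
    return theta
```

Solved over the full history only at switching times, this is the 'infrequent full log-loss' flavor; ECOLog's per-round surrogate instead updates a quadratic approximation every round.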
**Regarding Question 3**.
Space permitting, we will extend the experiments to highlight the dependence of the execution times on $d$.
**Regarding Question 4**: *Different values of $S$ for logistic and probit rewards*.
We observe that the empirical performance of RS-GLinCB is consistently better in terms of both regret and computational performance compared to the previous best logistic and other GLM bandit algorithms. The choice of exact parameter value $S, \kappa$ etc., is arbitrary. We will include a comparison with different values of $S$ in the updated version.
We thank the reviewer for pointing out the minor typos and will fix them.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I have no further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging the response. Do let us know if we can provide additional details which might support increasing your score. | Summary: The authors consider the problem of regret minimization in bounded generalized linear contextual bandits with limited adaptivity. Specifically, they consider two models of limited adaptivity: **M1** in which the update rounds must be chosen before the algorithm is run, and **M2** in which the algorithm can be adaptive as the algorithm proceeds. For **M1** the authors propose B-GLinCB that obtains $\tilde{O}(dRS\sqrt{T/\kappa^*})$, and for **M2**, RS-GLinCB that obtains $\tilde{O}(d \sqrt{T/\kappa^*})$. The efficacy of RS-GLinCB is shown numerically.
Strengths: - Clearly and well-written
- First $d \sqrt{T/\kappa}$-type regret that holds for generalized linear bandits *beyond* logistic bandits + $\mathrm{poly}(S)$-free leading term for generalized linear bandits
- Numerous interesting (and important) technical contributions were made to the algorithm design and the regret analysis.
Weaknesses: - The numerical experiments could benefit from more comparators, namely, randomized algorithms: Thompson sampling [1] (and its follow-up works, e.g., [2]) and recently proposed EVILL [3]. I know (and appreciate) that the authors' algorithms are primarily for the limited adaptivity scenarios. Still, given how one of the main contributions of this paper is state-of-the-art regret analysis, it would be important to have these (practically performing well) randomized algorithms as comparators.
- (Continuing from the first point) In the prior regret analyses of logistic bandits by [4], they obtained $\tilde{O}(d\sqrt{T/\kappa^*} + \kappa \wedge R_-(T))$, where $R_-(T)$ is some arm-geometry-adaptive term that can be much smaller than $\kappa$. As the algorithm here makes use of warmup, it must incur the worst-case geometry-dependent term. The authors should also compare with this algorithm for the sake of regret comparison.
- The algorithms involve an explicit warmup stage, which in practice may not be desirable. This is shown in the logistic bandits experiments, where, although RS-GLinCB is good eventually, its warmup (which scales with $\kappa$) forces the algorithm to incur high regret in the beginning (until round ~10000).
- Some references on logistic bandits are missing, namely [5], where, in the fixed arm-set setting, a Hessian-based optimal design (H-optimality) was proposed, followed by a warmup-based algorithm that obtains $\tilde{O}(d \sqrt{T/\kappa^*})$ regret, which is $\mathrm{poly}(S)$-free.
- Although the regret of RS-GLinCB is indeed $\mathrm{poly}(S)$-free, it mainly relies on a nonconvex optimization (Eqn. (6)), and it seems that the tractable convex relaxation again introduces factors of $S$ (and $R$) to the leading term. (please correct me here if I'm wrong) -- This point should be made precise in the introduction.
[1] https://proceedings.mlr.press/v108/kveton20a.html
[2] https://arxiv.org/abs/2209.06983
[3] https://proceedings.mlr.press/v238/janz24a.html
[4] https://proceedings.mlr.press/v130/abeille21a.html
[5] https://arxiv.org/abs/2202.02407
Technical Quality: 4
Clarity: 4
Questions for Authors: - The authors mention that the optimization problem in Eqn. (6) is nonconvex, and a convex relaxation results in additional factors of $\mathrm{poly}(R, S)$. Do the factors appear in the leading term as well?
- Are the analyses and algorithms amenable to unbounded, generalized linear models that are self-concordant? For instance, Gaussian is self-concordant with a multiplicative factor of $1$ but is unbounded.
- For the algorithms, the authors use l2-penalized MLE, then project it to the S-ball if necessary. Why not just do l2-constrained MLE, as done in [6]?
- The authors used an optimal design based on $\lVert \cdot \rVert_{V_t}$, thus incurring explicit dependency on $\kappa$. Is there any way to make the warmup more efficient by considering geometry-dependent norm, e.g., $\lVert \cdot \rVert_{H_t}$ as in [5]?
- The authors stated that the ellipsoidal projection is the main technical novelty for obtaining $\mathrm{poly}(S)$-free regret for RS-GLinCB. Can this then be combined with prior UCB-based algorithms for logistic bandits (or GLM bandits) to obtain similar improvements? Or is it the case that such ellipsoidal projection *combined* with some other techniques for the limited adaptivity allows for $\mathrm{poly}(S)$-free regret?
- (minor) Can the intuitions and ideas from kappa-dependent warmup be used for best arm identification, e.g., [7]?
[6] https://proceedings.mlr.press/v238/lee24c.html
[7] https://proceedings.mlr.press/v139/jun21a.html
(If all my concerns are sufficiently addressed, I'm leaning towards further raising the score)
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and insightful review.
**Regarding Weakness 1**: *Numerical comparisons with DDRTS-GLB and EVILL*.
This is a useful suggestion. We will implement the additional empirical comparisons mentioned in the review.
On the theoretical front, it is, however, relevant to note that DDRTS-GLB is an $O(T^2)$-computation algorithm with non-optimal regret ($\kappa$-dependence). EVILL is not comparable because that work deals with fixed arm sets, not contextual arms.
**Regarding Weakness 2**: *Regarding arm-geometry-adaptive term*.
Our analysis is not tuned to the arm-geometry-dependent $R_-(X)$ term as found in [2] and [17]. This is primarily because our algorithms require a warm-up (implicit in RS-GLinCB and explicit in B-GLinCB), during which there is no control over the regret, even in the analysis. This conforms with existing literature, for example, [7], where the second-order term is not able to accommodate the arm-geometry-dependent $R_-(X)$ term. As future work, it will be interesting to investigate algorithms that are efficient while being able to accommodate arm-geometry-dependent regret for GL bandits.
We appreciate this point and will highlight it in the updated version.
**Regarding Weakness 3**: *Warm-up in RS-GLinCB*.
The warm-up in RS-GLinCB is implicit and arm-geometry adaptive. The rounds in which warm-up occurs (i.e., when Switching Criterion I is triggered) are deterministic, based on the sequence of contexts $\\{ X_t \\}_{t \geq 1}$. This leads to warm-up only in certain rounds, based on the geometry of the contexts, which is very different from the warm-up in [7], where warm-up is decided based on the stochasticity of rewards as well. Therefore, in our experiments, we observe that after a few initial rounds, the regret is nearly constant, while other algorithms exhibit growing regret. For the logistic case, among the efficient algorithms available, RS-GLinCB's superiority is clearly established.
**Regarding Weakness 4**: *Prior works*.
Thank you for pointing out the relevant work of Mason et al. 2022. We will include it in our final version.
**Regarding Weakness 5 and Question 1**: *Convex relaxation and dependence on S*.
The reviewer is right that a convex relaxation leads to poly(S)-dependence in the first-order regret term. It would be interesting to design algorithms that require only convex optimizations but are still poly(S)-free. Thank you for the suggestion regarding non-convex optimization for poly(S)-free regret. We will clarify this in the introduction of our final version.
**Regarding Question 2**: *Unbounded self-concordant GLMs*.
Indeed, it does seem that our analysis extends to unbounded, self-concordant GLMs as well. However, we expect to incur additional log factors ($\log{T}$) in the regret. Taking Gaussian rewards as an example, the confidence interval would remain unchanged. We use the upper bound on the reward ($R$) in several places during the analysis (e.g., in Lemma A.5) or in the algorithm (while defining $\beta(x)$) – similar arguments can be made for Gaussian rewards as well. The analysis for Gaussian rewards follows from the fact that Gaussian random variables remain bounded with high probability, allowing for extensions of our analysis.
While such extensions are interesting, the current work focuses on the GLM model proposed in [8] and addresses the challenges in the limited adaptivity setting.
**Regarding Question 3**: *Regarding l2-penalized MLE*.
We have intentionally separated the convex optimization part and the (non-convex) projection step. This separation ensures that certain properties can be obtained before and after the projection; see, e.g., equation (26), which would not hold if we included the projection in the considered convex optimization problem. Moreover, it is not clear how the non-convex projection step can be included as a constraint in the log-loss optimization while ensuring that the desired properties hold. Appendix E provides additional details in this direction.
**Regarding Question 4**: *Geometry-dependent norm for warm-up*.
We thank the reviewer for this helpful suggestion. We will include this in the updated version of the paper. Indeed, while the worst-case regret guarantee remains unchanged, we can use the geometry-dependent norm during warmup. Moreover, the algorithm's analysis remains essentially the same.
**Regarding Question 5**: *Ellipsoidal projection*.
Restricting optimization (6) to the ellipsoid around $\theta_o$ leads to tighter regret because, for the non-“Switching Criterion I” rounds, it is guaranteed that $\lVert x \rVert_{V^{-1}} \leq O(1/\sqrt{\kappa})$. Hence $\langle x, \theta^* - \theta_o \rangle \leq \lVert x \rVert_{V^{-1}} \lVert \theta^* - \theta_o \rVert_V \leq O(1/\sqrt{\kappa}) \cdot \gamma \sqrt{\kappa}$. Therefore, the main idea is in the design of Switching Criterion I and not just in the ellipsoidal projection. Whether one can combine this switching criterion with existing algorithms and obtain better guarantees is an interesting direction worth exploring.
**Regarding Question 6**: *Best arm identification*.
This is an interesting question that complements the paper’s goals. It is, however, worth noting that the current warm-up conditions are designed to address changing contexts. Applying them to static arm sets may be suboptimal. For static arm sets, one can simply allocate initial rounds for warmup, as in [7]. The switching criteria-based warmup is needed in the current work because we are dealing with adversarial contexts. Overall, it remains unclear as to what one would gain by using the criteria for static arm sets.
Note: All reference numbers are same as in the submitted paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. They cleared up most of my concerns, and I'll be keeping my score and advocating for acceptance.
One more follow-up question to Question 4:
- so... using $H$-norm doesn't help with the analysis? I expected that as $H$ and $V$ may differ by a factor of $\kappa$, using $H$-based warmup would help significantly, e.g., [7] and Mason et al. (2022) ([7] in my above response). Can the authors elaborate on why using $H$ doesn't lead to any significant improvement? Moreover, *definitely not for the current rebuttal*, but I would be curious to see if there are any numerical differences in using $H$-based warmups.
Lastly, one additional suggestion: please consider including a table of contents.
---
Reply to Comment 1.1.1:
Comment: Thank you for the relevant suggestions and for your support in advocating for acceptance. We will include a table of contents in the updated version.
Regarding the use of the $H$-norm in warmup, the $\kappa$ factor in the lower-order term of the regret during the warm-up rounds arises from our approach of lower-bounding the matrix $H^* = \sum_t \dot{\mu}(x_t^T \theta^*) x_t x_t^T $ by $H^* \succeq \frac{V}{\kappa}$, where $V = \sum_t x_t x_t^T$ (similar to Jun et al. 2021 or [7] for burn-in phase). Given $\lVert \theta^* \rVert \leq S$, one might consider a tighter bound, such as $H^* \succeq \sum_t \dot{\mu}(\lVert x_t \rVert S) x_t x_t^T$ (referred to as $H^\text{naive}$ in Mason et al. 2022). However, this bound still incurs a $\kappa$ factor in the worst-case for certain arm sets, e.g. when all arms have the same length ($\lVert x \rVert = 1 \, \forall x \in \mathcal{X}$).
That said, for arm sets where $\lVert x \rVert$ varies significantly, using $H^\text{naive}$ may improve empirical performance, making it a worthwhile direction for future exploration. We will incorporate this idea from Jun et al. 2021 and Mason et al. 2022 in the updated version. | Summary: This paper considers regret minimization for a generalized linear reward model with limited adaptivity, in which the set of arms $\mathcal{X}_t$ is stochastically generated by an unknown distribution $\mathcal{D}$, and after pulling $x_t \in \mathcal{X}_t$ the learner receives a reward $r_t$ sampled from the GLM distribution $P(r|x_t)$ with unknown $\theta^*$.
In the first setting M1, the algorithm is given a budget $M$ and is asked to decide upfront $M$ rounds to update its policy. For M1, B-GLinCB is proposed, whose regret depends on $\sqrt{\hat{\kappa}^{-1}}$ or $\sqrt{{\kappa^*}^{-1}}$. In the second setting M2, the algorithm is given a budget $M$ and needs to decide $M$ rounds to update its policy adaptively.
For RS-GLinCB, the authors provided a regret bound where the $\kappa$-dependence appears only in the $\log^2 T$ term. Experimental results are demonstrated to validate their algorithms.
Strengths: The first attempt to study GLMs with limited adaptivity. The algorithm removes the instance-dependent non-linearity parameter $\kappa$, which can be exponentially large in $\lVert \theta^* \rVert$.
Weaknesses: 1. To compute the length of the warm-up batch in M1, knowledge of $\kappa$, which depends on the optimal parameter $\theta^*$, is required. The UCB/LCB scores and $\beta(x)$ also need this knowledge. In the M2 setting, Switching Criterion I depends on $\kappa$. In practice, how can we estimate $\kappa$, which is parameterized by the unknown $\theta^*$?
2. The reviewer could not understand how significant the benefit of removing the $\kappa$-dependence (compared with $\sqrt{\hat{\kappa}^{-1}}$ or $\sqrt{{\kappa^*}^{-1}}$ bounds) is, while allowing the policy to update at most $O(\kappa \log^2 T)$ times, which is itself $\kappa$-dependent. Only when $\kappa = o(\log T)$ is the amount of adaptivity reasonable, but in such cases the regret bound does not need to care about the $\kappa$-dependence. Conversely, when $\kappa$ is large, removing such dependence from the regret bound is important, but then the algorithm requires a large number of updates. Could the authors discuss the benefits/trade-offs of this point more?
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see questions in Weaknesses.
Typo/minor comments:
$x^*$ depends on the arm set $\mathcal{X}$, so other notation, such as $x^*_t$ for $\mathcal{X}_t$, may be clearer.
Since the def. of $\kappa$ in Alg 1 and Alg 2 is different, why not introduce different notations?
How is the scaling parameter $\beta(x)$ defined in (2)? Is it the same value defined later in (3)?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. We address the comments and questions below:
**Regarding Weakness 1**.
*(1a) Unknown $\kappa$*.
It is relevant to note that an upper bound on $\kappa$ suffices for the mentioned use cases. $\kappa$ is an instance-dependent parameter that captures the non-linearity and quantifies the hardness of the GLM instance.
There is no cyclic dependency here between estimating $\theta^*$ and $\kappa$. In particular, it is standard in the bandit literature (specifically, linear and generalized linear bandits) to assume an upper bound on $\theta^*$. Such an upper bound directly translates into an applicable upper bound for $\kappa$; see the discussion below.
*(1b) Dependence of $\kappa$ on* $\theta^*$.
We note that the inclusion of $\theta^*$ in the definition of $\kappa$, $\left( \kappa = \max_{x \in \cup_{t=1}^T {\cal X}_t} \frac{1}{\dot{\mu} ( \langle x, \theta^* \rangle )} \right)$, is beneficial. Prior works, such as [1], [2], use the following definition $$\kappa = \max_{\lVert \theta \rVert \leq S } \max_{x \in \cup_{t=1}^T {\cal X}_t} \frac{1}{\dot{\mu} (\langle x, \theta \rangle)} $$ which also appears in their regret expression. Our definition is much tighter, leading to potentially smaller regret.
**Regarding Weakness 2**.
Here, it is best to not conflate $\kappa$ and $T$. $\kappa$ is an instance dependent parameter, which is potentially large, but fixed for the instance. The number of rounds, $T$, on the other hand, is a growing quantity.
**Regarding the definition of $\beta(x)$ and minor comments**.
Yes, the understanding is correct. We will move this definition next to the first expression in the final version.
Thank you for pointing out the minor typos. We will fix these in the updated version of the paper.
[1] Filippi, Sarah, et al. "Parametric bandits: The generalized linear case." Advances in neural information processing systems 23 (2010).
[2] Faury, Louis, et al. "Jointly efficient and optimal algorithms for logistic bandits." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
---
Rebuttal 2:
Title: Thank you for your response
Comment: Dear Authors,
Thank you for your response, in which the following points have been resolved:
- (1) In the algorithm design, the access to an upper bound is sufficient.
- (2) Other questions such as the definition of $\kappa$ and its difference from prior work.
- (3) The necessary assumptions in the paper are standard in the field of GLMs.
I found that the analysis in this paper is very well-developed, particularly the self-concordance of bounded GLMs. Therefore, I have changed my score to Accept.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for updating the score. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving | Accept (poster) | Summary: The paper presents an approach to address the challenges of autonomous driving in multi-agent scenarios and heterogeneous interaction. It introduces the concept of Behavioral Topology (BeTop) and its corresponding network BeTopNet. BeTop is based on braid theory and aims to provide a topological representation of multi-agent interactions to enhance the prediction and planning of autonomous vehicles (AVs). The proposed method focuses on creating a compliant behavioral pattern among agents, which guides the trajectory generation for AVs. Extensive experiments on large-scale datasets such as nuPlan and WOMD demonstrate the good performance of BeTop in both prediction and planning tasks.
Strengths: * The paper is well-written and polished.
* The idea of using braid theory to explicit formulate the interactions among agents is interesting and lays a solid mathematical foundation for BeTop.
* The proposed method effectively integrates prediction and planning in a unified framework.
* The proposed method and baselines are extensively evaluated on two large-scale real-world datasets to demonstrate the performance on both motion prediction and planning.
* Ablation studies show that each component of the proposed method contributes to its performance on the planning task.
Weaknesses: * The proposed method infers one type of agent behavior topology from one mode of future trajectories (e.g., 8s), while the topology of multi-agent interaction for real-world autonomous driving is usually multi-modal and dynamic. A discussion of modeling multi-step and dynamic topologies of vehicles over a long horizon would be beneficial.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The proposed method infers one type of agent behavior topology from one mode of future trajectories (e.g., 8s), while the topology of multi-agent interaction for real-world autonomous driving is usually multi-modal and dynamic. A discussion of modeling multi-step and dynamic topologies of vehicles over a long horizon would be beneficial.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: * The current implementation of BeTop considers only one-step future topology. Extending this to multi-step reasoning could provide more robust predictions and planning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We address your questions below.
>**W1/Q1:** The proposed method infers one type of agent behavior topology from one mode of future trajectories (e.g., 8s), while the topology of multi-agent interaction for real-world autonomous driving is usually multi-modal and dynamic. A discussion of modeling multi-step and dynamic topologies of vehicles over a long horizon would be beneficial.
Thanks for your insightful suggestions. As mentioned in Lines 300-301, another challenge and direction is the development of Multi-step BeTop. In this paper, we clarify that multi-step BeTop refers to using multiple intervals to summarize entire future interactions with different temporal granularities.
To explore this, we add a *new ablation experiment* to evaluate the effect of multi-step BeTop with minimal adjustments to the current BeTopNet framework. In our study, future interactions are split into 1 (base), 2, 4, and 8 steps (intervals) for multi-step BeTop labels. The multi-step topology reasoning task is then deployed through the current BeTopNet decoder, with an expanded MLP Topo Head for output steps. A max-pooling over BeTop steps is performed to comply with the indexing for topo-guided attention. The ablation results are as follows:
|BeTop reasoning steps/intervals|mAP|minADE|minFDE|Miss Rate|Inference Latency (ms)|Training Latency (ms)| # Params. (M)|
|-----------------------------------|-----------|-----------|-----------|-----------|------------------------|-----------------------|------------|
|1/Base|0.392|0.637|1.328|0.144|70.0| 101.6|45.380|
|2|**0.394**|**0.633**|**1.325**|0.145|75.5|110.6|45.382|
|4|0.391|0.634|1.326|**0.142**|80.0|133.4|45.386|
|8|0.389|0.641|1.347|0.147|90.0|255.0|45.393|
Compared to the baseline 1-step reasoning, multi-step BeTop reasoning slightly improves BeTopNet's performance (e.g., 2-steps, +0.2 mAP), with a corresponding increase in computational costs for additional steps.
This result highlights the potential of multi-step reasoning to enhance BeTopNet in interactive scenarios. One-step BeTop performs relatively well because the current topo-guided attention is optimized for single-interval reasoning. However, the 8-step configuration shows a slight drop, which might result from the minimal adjustments for BeTopNet. Direct reasoning and Max-pool over multi-step BeTop at the topo-attention may not predict and capture multi-interval interactions effectively, leading to potential noise or information loss.
The current approach focuses primarily on formulating and integrating BeTop into the integrated prediction and planning (IPP) tasks, but refining temporal granularity for more accurate and efficient interactions remains an open question. We believe how to effectively leverage multi-step BeTop represents an interesting area for future exploration. We will enrich the experiment and discussion above in the revision accordingly.
---
Rebuttal 2:
Comment: I thank the authors for the extra results and clarification, and I will raise my score.
---
Rebuttal Comment 2.1:
Title: Response to the Reviewer
Comment: Thanks for the feedback and raising the score! We do appreciate your helpful review and will update the paper accordingly. | Summary: The paper introduces a novel approach to enhance the safety and social consistency of autonomous driving systems through improved multi-agent behavioral integration. To address inefficiencies and inconsistencies in current behavioral representations, the authors propose Behavioral Topology (BeTop), a topological framework derived from braid theory that captures consensual behavioral patterns among multiple agents. This framework guides downstream trajectory generations and ensures stable collective behavior when integrating prediction and planning. The paper also presents BeTopNet, a synergistic learning framework supervised by BeTop that manages behavioral uncertainty and enhances prediction and planning consistency. Extensive experiments on large-scale real-world datasets, including nuPlan and WOMD, demonstrate that BeTop achieves state-of-the-art performance in prediction and planning tasks, showcasing its effectiveness in interactive scenarios
Strengths: The paper introduces a novel approach to enhance the safety and social consistency of autonomous driving systems through improved multi-agent behavioral integration. To address inefficiencies and inconsistencies in current behavioral representations, the authors propose Behavioral Topology (BeTop), a topological framework derived from braid theory that captures consensual behavioral patterns among multiple agents. This framework guides downstream trajectory generations and ensures stable collective behavior when integrating prediction and planning. The paper also presents BeTopNet, a synergistic learning framework supervised by BeTop that manages behavioral uncertainty and enhances prediction and planning consistency. Extensive experiments on large-scale real-world datasets, including nuPlan and WOMD, demonstrate that BeTop achieves state-of-the-art performance in prediction and planning tasks, showcasing its effectiveness in interactive scenarios
Weaknesses: In general, the paper is well written and there are no major weaknesses; however, there are some aspects that could be further discussed in the paper: 1. While the paper demonstrates effectiveness on specific datasets, it remains uncertain how well the method generalizes to diverse driving environments and conditions not covered in the training data. 2. The computational overhead associated with the topological framework and synergistic learning might be higher compared to simpler models, possibly affecting real-time performance.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. It is unclear how BeTop is used during inference, when future trajectories for surrounding agents are unavailable.
2. It is unclear how $Q_R$ and $Q_A$ are initialized and defined.
3. How do you ensure the robustness of BeTopNet in highly dynamic and unpredictable driving environments, such as those with sudden changes or unexpected behaviors from other agents?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our work. We address your questions below.
> **W1:** While the paper demonstrates effectiveness on specific datasets, it remains uncertain how well the method generalizes to diverse driving environments and conditions not covered in the training data.
Thank you for your question. As mentioned in Lines 233-234 and Lines 239-240, we evaluate the performance of our method on both prediction and planning tasks using independent test sets, specifically Test14 and WOMD Test, as detailed in Tables 2-5 of the main text. These test sets are all private testing scenarios against the coverage in training data, ensuring that our method is evaluated objectively.
Furthermore, as noted in Lines 708-712, the validation sets in nuPlan may share certain scenarios with the training set. Therefore we argue that the widely-used Val14 benchmark may not be fully representative, and have thus placed the Val14 results in the appendix (Table 9). In the main text, we focus on reporting test results for a more objective assessment of our method's generalizability.
> **W2:** The computational overhead associated with the topological framework and synergistic learning might be higher compared to simpler models, possibly affecting real-time performance.
Agreed. This may potentially lie in computations with model scale and decoded agents, which are the common challenges for the learning-based predictors with multi-agent settings. A preliminary computation study is conducted for BeTopNet against the MTR baseline using a single A100 GPU:
| Method| Latency (ms) | GPU Memory (G) |
|-|-|-|
| MTR [22]| 84|5.2|
| BeTopNet (Ours) | 89| 6.5 |
We can observe that BeTopNet reports comparable latency and memory costs compared to MTR, with better prediction accuracy shown in the paper. The similar latency is due to the topo-guided attention, which reduces the KV features in agent aggregation during decoding, thereby decreasing the main computation cost for multi-head attention tensors. While BeTop introduces extra computations for reasoning, it requires more GPU memory for cached topology tensors. In practice, these computational challenges might be addressed through knowledge distillation or half-precision computations to reduce GPU requirements. We will enrich the above discussion in our revised version.
> **Q1:** It is unclear how BeTop is used during inference, when future trajectories for surrounding agents are unavailable.
BeTop serves as a label to supervise the reasoning process in BeTopNet. During both training and inference, the reasoned BeTop is used within BeTopNet to guide predictions. This can be referred to Figure 3, Reason heads in Lines 196-197, and Training loss in Lines 205-206. We will improve the content accordingly for better understanding.
> **Q2:** It is unclear how $Q_R$ and $Q_A$ are initialized and defined.
$\mathbf{Q}_A$ is initialized by learnable embeddings. For the prediction task, the embedding will also be added with anchored features. These anchored features are predefined by K-means end-point anchors, referred to the clustering process in MTR [22].
$\mathbf{Q}_R$ is initialized by MLP encoding of relative features of $\mathbf{S}_R$ [63].
More details can be referred to Lines 637-639 in the Appendix and references [22, 63]. We will enrich extra clarifications in the revised main context accordingly.
> **Q3:** How do you ensure the robustness of BeTopNet in highly dynamic and unpredictable driving environments, such as those with sudden changes or unexpected behaviors from other agents?
Thank you for the insightful question. Our method has tried to tackle and validate these challenges as follows:
- **Methodology**: BeTopNet ensures interactive robustness through its synergistic decoder design and topo-guided attention, which iteratively refines predictions by focusing on potential future interactions (via reasoned BeTop) and reacts to possible changes in the next layer of decoding. For prediction/planning output, we use a Gaussian Mixture Model (GMM) to account for dynamic uncertainty. The contingency learning paradigm formulates predictions and planning guidance that balance short-term safety and long-term compliance. Specifically, this paradigm enhances short-term (0-3s) safe planning by considering worst-case predictions and conducting long-term branching (3-8s) for various prediction uncertainties.
- **Experiments**: BeTopNet's robustness can be demonstrated in Test14-Hard and Test14-Inter benchmarks. Test14-Hard highlights scenarios where rule-based planning agents perform poorly, indicating difficult environments, while Test14-Inter focuses on highly dynamic situations where physics-based planners struggle. Our strong results in these benchmarks verify BeTopNet's capability to tackle such challenging scenarios.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification and additional information. I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Response to the Reviewer
Comment: Thank you for the response and recognition! We appreciate your valuable review for improving our work. | Summary: The paper addresses the challenges of autonomous driving by integrating behavior among interactive agents, specifically focusing on issues caused by multi-agent scene uncertainty and heterogeneous interactions. To tackle this, the paper introduces a topological formation called Behavioral Topology (BeTop), derived from braid theory, to represent consensual behavioral patterns among multiple agents. This formulation guides downstream trajectory generations and enhances the consistency of behavior prediction and planning.
Strengths: 1. The experimental results seem to support the authors' claims.
2. It builds on existing braid topology, a novel method for an existing problem setup.
3. The use of braid theory to distill compliant interactive topology from multi-agent future trajectories seems a good and intuitive idea to me.
Weaknesses: The paper is generally not well-written, with extensive use of ChatGPT leading to paragraphs that are hard to follow.
- As far as I understand, the paper uses braid topology for just one step in planning and prediction. In long-horizon planning and prediction, this may not be sufficient as the motion of vehicles is no longer independent. Could you provide some basis to the idea why one step topology will be enough?
- There has been prior work using braid topology for planning (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9812118). Benchmarking your method against this prior work would provide a clearer picture of this method.
- The method appears very similar to existing methods like Wayformer, with the main difference being the use of braid topology. I would like to see a detailed comparison showing how much the encoding of braid topology improves performance compared to Wayformer, especially given that only one-step braid topology is used instead of long-horizon topology.
Overall, the paper has potential, and I would be happy to discuss more with you. I would be willing to increase my score if my questions and concerns are addressed satisfactorily.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable suggestions; we really appreciate your comments. We have carefully revised the draft for better readability. We address each question and point of confusion in the weaknesses as follows.
>**W1:** As far as I understand, the paper uses braid topology for just one step. Could you provide some basis to the idea why one step topology will be enough?
Thank you for the question. We would like to clarify that "one step" in the BeTop label refers to an interval summarizing interactions across the entire future horizon (T=8s, 10Hz), rather than a single timestep, as illustrated in Lines 143-145. As detailed in Line 144, $e_{ij}=1$ signifies any future interaction between agents $i$ and $j$, while $e_{ij}=0$ indicates no interaction across all timesteps. Variations in BeTop labeling capture interactions over the full horizon with different temporal granularities.
As mentioned in Lines 300-301, another challenge and direction for scalability is the development of Multi-step BeTop. In this paper, we clarify that multi-step BeTop refers to using multiple intervals to summarize entire future interactions with different temporal granularities.
**We refer to the multi-step results and discussions in the Global Rebuttal above.**
>**W2:** There has been prior work using braid topology for planning (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9812118). Benchmark would provide a clearer picture of this method.
Thank you for your question. The work in reference [43] has been cited in the paper, but we understand that it may cause some confusion. The differences of our work from [43] are as follows:
- The paper [43] *does not involve planning tasks*, but rather focuses on traffic scenario analysis using topological braids to calculate an Interaction Score, as in Lines 88-90. In contrast, BeTop formulates behavioral topology and integrates it into joint prediction and planning using BeTopNet.
- In terms of formulation, [43] converts braid interactions into braid words, which is suitable for calculating the TC index but challenging for reasoning due to exponential complexity. In contrast, BeTop uses topological representations to simplify the formulation process, allowing reasoning and planning tasks with quadratic complexity.
- While [43] conducts case studies on real-world datasets using log-replayed trajectories, BeTopNet includes reasoning in both prediction and planning benchmarks. This distinction highlights how BeTop goes beyond simple quantification to actively engage in prediction and planning processes.
We will enrich these discussions to provide further clarity.
>**W3:** I would like to see a detailed comparison showing how much the encoding of braid topology improves performance compared to Wayformer, especially given that only one-step braid topology is used instead of long-horizon topology.
Thank you for your question. We understand that the similarities in the encoder-decoder structure might cause some confusion, so we would first like to highlight a few key differences between BeTopNet and Wayformer [61]. In addition, we conduct *additional ablation studies* to demonstrate BeTop's effectiveness within the Wayformer framework.
**Key Differences:**
- **Model Foundations**: As mentioned in Section 3.2 of the main paper and Section C.1 of the Appendix, BeTopNet follows the encoder and anchor-based decoder design of Motion Transformer (MTR) [22]. While Wayformer focuses on encoder novelty, BeTopNet centers on decoder design, employing a synergistic Transformer decoder that jointly decodes trajectories and reasons BeTop interactions. This is enabled by topo-guided attention, which leverages the reasoned topology for compliant predictions.
- **Encoding**: BeTopNet's encoder is based on MTR and uses local self-attention to encode all scene agents, whereas Wayformer handles only the surrounding agents to avoid out-of-memory issues.
- **Decoding Strategies**: BeTopNet uses iterative refining strategies to selectively aggregate encoded features and decode trajectories for each decoder layer. In contrast, Wayformer uses stacked vanilla Transformer decoders for one-shot trajectory output.
- **Attention Design**: Wayformer’s latent query attention focuses on encoding query reduction, similar to MTR's local attention. BeTopNet uses topo-guided attention for cross-attention in the decoder, with reduction based on sorted BeTop results supervised by BeTop labels.
Given these differences, our studies primarily compare BeTopNet against MTR. In the rebuttal, we conduct additional ablations with BeTop in Wayformer to assess generalization under different model foundations.
**Ablative Experiments:**
- **Wayformer+BeTop**: The impact of using BeTop as additional supervision for interactions within the vanilla Wayformer model.
- **Wayformer+BeTopNet**: This combines the Wayformer encoder with the BeTopNet decoder design, showcasing our key contribution of topo-guided attention and iterative reasoning for complex interactions.
The ablations use the protocols outlined in Lines 274-277 with the end-to-end decoding approach (Modality=6). Since Wayformer is not open-sourced, we use the reproduction from (https://github.com/vita-epfl/UniTraj). The results are as follows:
|Method|mAP|minADE|minFDE|Miss Rate|
|-|-|-|-|-|
|Wayformer|0.281|0.661|1.417|0.199|
|Wayformer+BeTop|0.290|0.637|1.364|0.178|
|**Wayformer+BeTopNet**|**0.344**|**0.604**|**1.261**|**0.166**|
Compared to the vanilla Wayformer, incorporating BeTop as supervision improves performance with a -6.2% Miss Rate and +3.2% mAP. Furthermore, integrating BeTopNet significantly boosts performance, achieving a +18.6% mAP and -7.2% Miss Rate. This enhancement is due to our ***synergistic decoder***, which uses iterative BeTop reasoning and Topo-guided attention to refine trajectories by selectively aggregating interactive features. These results demonstrate BeTopNet's superior ability to enhance prediction and planning.
---
Rebuttal Comment 1.1:
Title: Thanks for your rebuttal.
Comment: Hi Authors,
Thank you for the clarification and doing experiments within this short time. I am increasing my score. All the best!
---
Reply to Comment 1.1.1:
Title: Response to the Reviewer
Comment: Thanks for your feedback and raising the score! We will integrate your insightful comments in our revision accordingly. | Summary: This paper introduces a new approach, called Behavioral Topology (BeTop), to address the challenges in modeling multi-agent behaviors in autonomous driving. By utilizing braid theory, BeTop explicitly represents the consensual behavioral patterns among multiple agents, facilitating better prediction and planning. The framework, BeTopNet, incorporates this topological reasoning into a synergistic learning model that guides both behavioral prediction and planning. Good experiments on large-scale datasets, such as nuPlan and WOMD, demonstrate the superior performance of BeTopNet in both prediction and planning tasks, showcasing significant improvements over existing methods.
Strengths: Good Presentation: The paper is well-organized and clearly presents the motivation, methodology, and results. The introduction of BeTop is logically structured, and the figures help in understanding the complex concepts.
Reasonable Formulation: The use of braid theory to represent multi-agent interactions is innovative and provides a solid theoretical foundation. This formulation helps in capturing the interactive behaviors more effectively compared to traditional dense or sparse representations.
Extensive Experiments: The authors have conducted comprehensive experiments on large-scale real-world datasets. These experiments cover both prediction and planning tasks, providing a thorough evaluation of the proposed method.
Performance Improvement: The experimental results demonstrate that BeTopNet achieves improved performance in prediction and planning tasks, especially in planning scores and prediction accuracy, with detailed metrics provided to back these claims.
Weaknesses: Lack of Discussion on Multi-Agent Settings: While the paper introduces a topological approach for multi-agent behavior modeling, it lacks an in-depth discussion on how this method scales and handles various multi-agent settings. More insights into the limitations and potential scalability issues would strengthen the paper.
Formulation for Multi-Agent Settings: The paper could benefit from a more detailed formulation of the multi-agent setting. While the braid theory is used to model interactions, a clearer and more comprehensive explanation of how this integrates with different numbers and types of agents would be helpful.
Technical Quality: 3
Clarity: 3
Questions for Authors: Lack of Discussion on Recursive Settings: While the paper introduces a topological approach for multi-agent behavior modeling, it lacks an in-depth discussion on how this method scales and handles various steps of multi-agent settings. More insights into the limitations and potential scalability issues would strengthen the paper. While the braid theory is used to model interactions, a clearer and more comprehensive explanation of how this integrates with different numbers and types of agents would be helpful.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your appreciation and the helpful review of our work. We address your concerns below.
>**W1/Q1:** Lack of Discussion on Multi-Agent Settings: While the paper introduces a topological approach for multi-agent behavior modeling, it lacks an in-depth discussion on how this method scales and handles various steps of multi-agent settings. More insights into the limitations and potential scalability issues would strengthen the paper.
1. **How does this method scale:**
- **Scaling model size:** In the rebuttal, we *additionally* ablate BeTopNet at different model scales. We adjust the number of decoding modalities and feature dimensions to obtain three model sizes. The results are as follows, and we have added this experiment to the revision.
| Method| Description|mAP|Miss Rate|Latency (ms)|# Params. (M)|
|-|-|-|-|-|-|
| BeTopNet-Small|Mode=6, DecDim=256|0.391|0.131| 45|28.91|
| BeTopNet-Medium|Mode=64, DecDim=256|0.437|0.119|65|28.91|
| BeTopNet-Base|Mode=64, DecDim=512|**0.442**|**0.117** |70|45.38|
- **Scaling the number of decoded agents:** Besides, we report the computational complexity as the number of decoded agents increases during prediction, which is also a common challenge in the multi-agent setting. A preliminary computation study compares BeTopNet against MTR:
|Method|# Decode Agents|Latency (ms)|GPU Memory (G)|
|-|-|-|-|
|MTR [22]|8|84|5.2|
|MTR [22]|16|123|7.1|
|MTR [22]|32|193|15.6|
|BeTopNet (Ours)|8|89|6.5|
|BeTopNet (Ours)|16|120|10.8|
|BeTopNet (Ours)|32|166|19.2|
We can observe that BeTopNet reports comparable latency and memory costs to MTR, with the better prediction accuracy shown in the paper. The similar latency is due to topo-guided attention, which reduces the KV features in agent aggregation during decoding, thereby decreasing the main computation cost of the multi-head attention tensors. Meanwhile, BeTop introduces extra computation for reasoning and requires more GPU memory for cached topology tensors. In practice, these computational challenges might be addressed through knowledge distillation or half-precision computation to reduce GPU requirements. We will enrich the above discussion in our revised version.
2. **How does this method handle various steps of multi-agent setting:** As mentioned in Lines 300-301, another challenge and direction for scalability is the development of Multi-step BeTop. In this paper, we clarify that multi-step BeTop refers to using multiple intervals to summarize entire future interactions with different temporal granularities.
To explore this, we add a *new ablation experiment* to evaluate the effect of multi-step BeTop with minimal adjustments to the current BeTopNet framework.
**We refer to the Multi-step results and discussions in the *Global Rebuttal* above.**
>**W2:** Formulation for Multi-Agent Settings: The paper could benefit from a more detailed formulation of the multi-agent setting. While the braid theory is used to model interactions, a clearer and more comprehensive explanation of how this integrates with different numbers and types of agents would be helpful.
Thanks for your question.
1. To have a more intuitive understanding of how BeTop deals with varied agent numbers and categories:
- For **varied agent numbers**: Each BeTop outlines all agents in the scene. We leverage batched padding up to the maximum number of scene agents per batch and generate a padding mask during batched BeTop formulation. Hence, BeTop can be formulated uniformly via the batched padding mask.
- For **varied agent types**: Variations in agent types would not affect the formulation process. BeTop is only defined through states $(x, y, \theta)$ from multi-agent future trajectories. In fact, the agent types are considered in multi-agent states $\mathbf{X}$ and queries $\mathbf{Q}_A$ as model inputs.
2. The detailed formulation of BeTop (Lines 127-146) is summarized for the multi-agent future interactions.
**The inputs :**
- (1) Multi-agent future trajectories $\mathbf{Y}$ of states $(x, y, \theta)$; Tensor shape: $[N_a, T_f, 3]$. (We omit subscript $a$ and $f$ below for simplicity.)
- (2) Agent padding mask $M$; Tensor shape: $[N]$
**The formulation process:**
Loop over indices $i$, $j$ from 1 to $N$:
- (1) To calculate $e_{ij}$, first index the future state tensors of the source agent $\mathbf{Y}[i]$ and the target agent $\mathbf{Y}[j]$; both have shape $[T, 3]$;
- (2) Apply the `LocalTransformation()` function in the frame of agent $i$ to the source agent $\mathbf{Y}[i]$ and the target agent $\mathbf{Y}[j]$. This corresponds to the braid function mapping $(f^i_i, f^i_j)$ in Lines 132-134 and the process in Lines 143-144; both outputs have shape $[T]$ for the lateral states;
- (3) Pass the locally transformed $\mathbf{Y}[i]$ and $\mathbf{Y}[j]$ through the `SegmentIntersect()` function [76], referred to as `I(.,.)` in Line 144, with shape $[T]$; then max-pool across $T$ to obtain the finalized $e_{ij}$, summarizing the future interactions as one step (one interval);
- (4) Set $\mathcal{E}\_{ij} = e\_{ij}$.
Expand the agent padding mask to $MM^T$ (tensor shape: $[N, N]$) and apply it to mask BeTop $\mathcal{E}$.
**Output:** masked BeTop $\mathcal{E}$ for varied agents; tensor shape: $[N, N]$
In practice, the formulation process is conducted in batches and calculated using parallelized tensor multiplications instead of the loop iterations for more efficient computation.
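For intuition only, the loop-based formulation above could be sketched as follows. This is a minimal NumPy sketch under our own simplifying assumptions: `local_lateral` and `segments_intersect` are illustrative stand-ins for the paper's `LocalTransformation()` and `SegmentIntersect()` [76] (here, a frame rotation and a sign-change test on the lateral gap), not the exact implementations.

```python
import numpy as np

def local_lateral(traj_i, traj_j):
    """Stand-in for LocalTransformation(): express both trajectories in the
    initial frame of agent i and keep only the lateral coordinate, shape [T]."""
    x0, y0, th0 = traj_i[0]
    c, s = np.cos(-th0), np.sin(-th0)
    def to_local(traj):
        dx, dy = traj[:, 0] - x0, traj[:, 1] - y0
        return s * dx + c * dy  # lateral component in agent i's frame
    return to_local(traj_i), to_local(traj_j)

def segments_intersect(lat_i, lat_j):
    """Stand-in for SegmentIntersect() / I(.,.): a sign change of the lateral
    gap between consecutive timesteps marks a crossing of the two strands."""
    gap = lat_i - lat_j
    return gap[:-1] * gap[1:] < 0  # boolean array of shape [T-1]

def betop_labels(Y, mask):
    """Y: future trajectories [N, T, 3] of states (x, y, theta); mask: [N]
    valid-agent padding mask. Returns one-step BeTop E of shape [N, N],
    with e_ij = 1 iff any crossing occurs over the whole future horizon."""
    N = Y.shape[0]
    E = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            lat_i, lat_j = local_lateral(Y[i], Y[j])
            # max-pool over T: any intersection across the horizon
            E[i, j] = int(segments_intersect(lat_i, lat_j).any())
    return E * np.outer(mask, mask)  # expand padding mask M M^T and apply
```

For example, two trajectories that swap lateral order over the horizon yield $e_{ij}=1$, while two parallel trajectories yield $e_{ij}=0$; the batched version replaces the double loop with tensor operations as noted above.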
We hope the detailed formulations will aid in a better understanding of the multi-agent settings.
---
Rebuttal 2:
Title: Response
Comment: Thanks for the detailed response. I do not have other questions, though some of my concerns remain. But in general, I see fit to accept this work.
---
Rebuttal Comment 2.1:
Title: Response to the Reviewer
Comment: Thank you for the kind feedback. Your time and effort in reviewing our work are truly appreciated! We will revise the manuscript according to your valuable comments. During the remaining author-reviewer discussion period, we would be glad to provide further clarifications for any concerns you may have. | Rebuttal 1:
Rebuttal: *Dear Area Chairs and Reviewers,*
*We thank all the Reviewers for their careful reviews and valuable comments on our work. We have taken each comment into consideration, added more ablative experiments in the rebuttal, and clarified some technical details. Please see each response below. We are grateful for the opportunity to improve our work with your guidance.*
*Best Regards,*
*The Authors*
**Here we refer to the general question proposed by #Rew.PWLC and #Rew.M7XC:**
**Ablations and discussions for multi-step:** As mentioned in Lines 300-301, another challenge and direction for scalability is the development of Multi-step BeTop. In this paper, we clarify that multi-step BeTop refers to using multiple intervals to summarize entire future interactions with different temporal granularities.
To explore this, we add a *new ablation experiment* to evaluate the effect of multi-step BeTop with minimal adjustments to the current BeTopNet framework. In our study, future interactions are split into 1 (base), 2, 4, and 8 steps (intervals) for multi-step BeTop labels. The multi-step topology reasoning task is then deployed through the current BeTopNet decoder, with an expanded MLP Topo Head for output steps. A max-pooling over BeTop steps is performed to comply with the indexing for topo-guided attention following the current design in Line 144. The ablation results are as follows:
|BeTop reasoning steps/intervals|mAP|minADE|minFDE|Miss Rate|Inference Latency (ms)|Training Latency (ms)| # Params. (M)|
|-|-|-|-|-|-|-|-|
|1/Base|0.392|0.637|1.328|0.144|70.0| 101.6|45.380|
|2|**0.394**|**0.633**|**1.325**|0.145|75.5|110.6|45.382|
|4|0.391|0.634|1.326|**0.142**|80.0|133.4|45.386|
|8|0.389|0.641|1.347|0.147|90.0|255.0|45.393|
Compared to the baseline 1-step reasoning, multi-step BeTop reasoning slightly improves BeTopNet's performance (e.g., 2-steps, +0.2 mAP), with a corresponding increase in computational costs for additional steps.
This result highlights the potential of multi-step reasoning to enhance BeTopNet in interactive scenarios. One-step BeTop performs relatively well because the current topo-guided attention is optimized for single-interval reasoning. However, the 8-step configuration shows a slight drop, which might result from the minimal adjustments made to BeTopNet: directly reasoning over and max-pooling the multi-step BeTop at the topo-guided attention may not capture multi-interval interactions effectively, leading to potential noise or information loss.
The current approach focuses primarily on formulating and integrating BeTop into the integrated prediction and planning (IPP) tasks, but refining temporal granularity for more accurate and efficient interactions remains an open question. We believe how to effectively leverage multi-step BeTop represents an interesting area for future exploration. We will enrich the experiment and discussion above in the revision.
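For clarity, the max-pooling step described above amounts to the following schematic sketch (shapes and the function name are our own illustrative choices, not the paper's code):

```python
import numpy as np

def pool_multistep_betop(E_steps):
    """E_steps: [K, N, N] per-interval topology predictions from the expanded
    Topo Head (K reasoning steps/intervals). Max-pooling over the K intervals
    recovers a single [N, N] topology, so topo-guided attention can index it
    exactly as in the one-step case."""
    return E_steps.max(axis=0)
```

Under this minimal adaptation, any interaction predicted in any interval survives the pooling, which is consistent with the one-step definition of $e_{ij}$ but may discard per-interval timing information, as discussed above.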
*Please refer to the rebuttal modules below for our point-to-point responses to each reviewer.* | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AutoTimes: Autoregressive Time Series Forecasters via Large Language Models | Accept (poster) | Summary: This paper argues that existing LLM based time series models have not fully exploited the inherent autoregressive property and the decoder-only architecture of LLMs. To address this problem, this paper introduces a novel AutoTimes model, which exploits the autoregressive property of LLMs. Experimental results demonstrate that AutoTimes could outperform the baseline methods.
Strengths: 1. This paper exploits the autoregressive property of the LLMs.
2. Comprehensive experiments are conducted to demonstrate the effectiveness of the proposed AutoTimes, including forecasting, zero-shot forecasting and in-context forecasting.
3. This paper is well-written and easy to follow.
Weaknesses: 1. Autoregression seems a little bit trivial.
2. It is not quite clear how much improvements can be brought by auto-regressively generating time series. Either direct comparison or theoretical analysis should be provided.
3. It is difficult to directly draw conclusions about the scaling behaviors from Table 5. The authors draw conclusions by comparing results of OPT-x models, rather than comparing different models e.g., compare OPT with LLaMA or GPT-2.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer k2uw
Many thanks to Reviewer k2uw for providing a valuable review.
**Q1**: Reclarify the contributions of the proposed method.
The reviewer mentioned that "the paper exploits the autoregressive property of the LLMs". We agree with this argument but also would like to highlight detailed contributions of our work:
* We delve for **the first time into the autoregression in LLM4TS methods**. It addresses the raising concerns about the effectiveness of prevalent non-autoregressive LLM4TS methods (refer to $\underline{\text{Q1 of Reviewer 1PWH}}$).
* We proposed the one-for-all benchmark that **breaks the status quo of respective training on specific lookback-forecast length** (appreciated by $\underline{\text{Reviewer EAH7}}$), which is an essential step toward foundation models.
* Our method **achieves state-of-the-art forecasting performance** and requires **significantly fewer tunable parameters among advanced LLM4TS methods**. The method's efficiency is acknowledged by all other reviewers.
* We present the concept of **in-context forecasting for the first time**, where time series prompts and prompt engineering are closely researched (refer to $\underline{\text{Q3 of Reviewer acJd}}$).
* Beyond adopting LLMs on end-to-end forecasting, we facilitate the full capabilities of LLMs for time series, such as iterative multistep generation, zero-shot generalization, and scaling behavior.
**Q2**: How much improvement is brought by autoregressively generating time series?
To address your concern, we provide a comprehensive ablation study: AutoTimes (FlattenHead) replaces the original segment-wise projection layer with the (Flatten + linear head) of PatchTST, the prevalent module in non-autoregressive models. Here are the results:
|ETTh1 (MSE\|MAE)|AutoTimes (Original)|AutoTimes (FlattenHead)|
|-|-|-|
|Pred-96|**0.360**\|**0.400**|0.385\|0.420|
|Pred-192|**0.388**\|**0.419**|0.445\|0.463|
|Pred-336|**0.401**\|**0.429**|0.463\|0.475|
|Pred-720|**0.406**\|**0.440**|0.574\|0.542|
|ECL (MSE\|MAE)|AutoTimes (Original)|AutoTimes (FlattenHead)|
|-|-|-|
|Pred-96|**0.129**\|**0.225**|0.142\|0.247|
|Pred-192|**0.147**\|**0.241**|0.157\|0.259|
|Pred-336|**0.162**\|**0.258**|0.201\|0.311|
|Pred-720|**0.199**\|**0.288**|0.232\|0.331|
|Weather (MSE\|MAE)|AutoTimes (Original)|AutoTimes (FlattenHead)|
|-|-|-|
|Pred-96|**0.153**\|**0.203**|0.155\|0.209|
|Pred-192|**0.201**\|**0.250**|0.202\|0.251|
|Pred-336|**0.256**\|**0.293**|0.257\|0.286|
|Pred-720|**0.331**\|**0.345**|0.333\|0.349|
|Traffic (MSE\|MAE)|AutoTimes (Original)|AutoTimes (FlattenHead)|
|-|-|-|
|Pred-96|**0.343**\|**0.248**|0.367\|0.261|
|Pred-192|**0.362**\|**0.257**|0.391\|0.282|
|Pred-336|**0.379**\|**0.266**|0.404\|0.287|
|Pred-720|**0.413**\|**0.284**|0.432\|0.294|
As shown in the above tables, non-autoregressive generation performs consistently worse than the original AutoTimes. In addition to the empirical results, we'd like to provide an analysis of the two approaches:
* **One model fits all lengths:** While most deep forecasters have to be trained and applied to specific length settings, autoregressive models are more feasible in variable-length scenarios. We provide the comparison as follows:
|Comparison|Non-autoregressive|Autoregressive|
|-|-|-|
|Training|Trained with specific lookback-forecast lengths|Trained with the context length with **each generated token being supervised**|
|One-step Forecasting|Applicable only to fixed lookback-forecast lengths|Flexible for scenarios with **lengths up to the context length**, like large language models|
|Rolling Forecasting|Has to drop the lookback series because of the fixed input length|**Can prolong the lookback horizon** until the total length exceeds the context length|
* **Fewer parameters for training:** Supposing $L$ is the lookback length, $F$ is the forecast length, $S$ is the segment (token) length and $N$ is the segment (token) number. We count the parameters for embedding and projection:
* Non-autoregression: time series segment ($S$) -> representation ($D$) -> flattened ($N\times D$)-> future time series ($F$)
* Autoregression: time series segment ($S$) -> representation ($D$) -> next time series segment ($S$)
In non-autoregressive models, all the tokens of the lookback series **are flattened and projected to the future time series**, which leads to a costly parameter count of $ND\times F$. Instead, the projection in AutoTimes is conducted independently on each segment, introducing only $D\times S$ parameters.
* **Consistent with the utilization of LLMs:** the main claim of our paper is that non-autoregressive LLM4TS leads to inconsistencies, where inherently GPT-style models are fine-tuned in the BERT-style. Instead, we suppose the token transition of LLMs is general-purpose and find it transferable as the transition of time series segments. Consequently, the powerful generation ability of LLMs can be naturally inherited.
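As a back-of-the-envelope sketch of the parameter comparison above (the dimensions below are illustrative choices of our own, not the paper's exact configuration):

```python
def head_params(D, S, N, F, autoregressive):
    """Weight count of the output head only (biases omitted).
    Non-autoregressive: flatten N tokens of dim D, project to F future points,
    giving N*D*F weights. Autoregressive: project each token of dim D to the
    next S-point segment, giving D*S weights."""
    return D * S if autoregressive else N * D * F

# Illustrative setting: token dim D=768, segment length S=96,
# N=7 lookback segments, forecast length F=96 points.
D, S, N, F = 768, 96, 7, 96
flatten_head = head_params(D, S, N, F, autoregressive=False)  # 516,096
auto_head = head_params(D, S, N, F, autoregressive=True)      # 73,728
print(flatten_head, auto_head, flatten_head // auto_head)
```

In this hypothetical setting the autoregressive head is $NF/S = 7\times$ smaller, and the gap widens as the lookback token number $N$ or forecast length $F$ grows.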
**Q3**: Concern about the scaling behaviors of LLM-based forecasters.
Thank you for your feedback regarding the conclusions drawn from Table 5. It is recommended to see the scaling behavior in $\underline{\text{Figure 4 of the main text}}$, where the scaling behavior is not only observed in OPT-x models but also revealed among GPT-2, OPT, and LLaMA.
We further eliminate the influence of the parameter count in the trainable layers. As shown in the following table, a larger LLaMA-7B with fewer trainable parameters still achieves better performance than OPT (1.3B~6.7B), demonstrating that the performance stems from the scaling of LLMs, not simply from the parameter count of the trainable layers.
|Datasets|GPT-2|OPT-350M|OPT-1.3B|OPT-2.7B|OPT-6.7B|LLaMA-7B|
|-|-|-|-|-|-|-|
|Hidden Dimension|768|1024|2048|2560|4096|4096|
|Embedding layer|2-layer MLP|2-layer MLP|2-layer MLP|2-layer MLP|2-layer MLP|nn.Linear|
|Trainable Parameters (M)|0.44|0.58|1.10|1.36|2.15|0.79|
|Performance (Avg. MSE)|0.397|0.401|0.396|0.394|0.394|0.389|
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
After reading all the comments and responses, my concerns are mostly addressed. I'm happy to raise my scores.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response and Raising the Score
Comment: Thanks again for your response and raising the score to 7! In the final version, we will elaborate more on clarifying contributions and novelty, and include the additional experiments to the paper.
---
Rebuttal 2:
Title: Eagerly Await Your Response
Comment: Dear Reviewer k2uw,
Thanks again for your valuable and constructive review. Would you please let us know if your concerns about autoregression effectiveness and scaling behavior are resolved? **Since the Author-Reviewer discussion period is ending in two days, we eagerly await your response.**
Till now, we find that your rating is still "reject". We respectfully remind you that **we have comprehensively evaluated the improvement of autoregression and validated the scaling effect across different LLM categories, which should help you better assess our work.** Also, **we have clarified the contributions of the method and paradigm renovation, and how they differ from previous approaches.**
**We do hope you can consider our rebuttal and kindly let us know if our rebuttal has addressed your concern.**
Sincere thanks for your dedication! We are looking forward to your reply.
Authors
---
Rebuttal 3:
Title: We Are Looking Forward to Your Reply
Comment: Dear Reviewer k2uw,
Thank you again for your valuable and constructive review.
We kindly remind you that there is **only a half day left** before the Author-Reviewer discussion period ends. We find that your rating is still "reject", so **we are eager to receive your response to our rebuttal**. We respectfully ask if you have any further concerns. Please let us know if you have other questions about our paper. We will be more than happy to engage in more discussions and improvements to the paper.
Thank you sincerely for your dedication! We eagerly await your reply.
Authors | Summary: The authors present AutoTimes, a method that utilizes large language models (LLMs) for time series forecasting. One of the key underexplored research topics addressed by the authors is the lack of models and pre-training mechanisms that result in foundation models capable of handling lookback and forecasting horizons of arbitrary length. This is achieved by adapting the LLM forecasting framework to autoregressively forecast time series segments. Furthermore, the paper outlines techniques such as in-context learning to further improve prediction performance. Compared to previous works, the methodology requires only a small number of trainable parameters compared to previous LLM fine-tuning techniques. As far as I am aware, this work is the first to be capable of handling multimodal input and producing an autoregressive forecast in the domain of LLMs and time series.
Strengths: - The paper is well-written and easy to follow. I appreciate how the authors clearly outlined their contributions.
- The observation that non-autoregressive methods may contradict decoder-only LLMs and the shortcomings of prior methods is well-motivated. The solution proposed in the context of LLMs and time series is novel.
- The introduction of a One-for-all benchmark, which involves prediction horizons transfer learning instead of dataset transfer learning, is innovative.
- The authors provide strong motivation and experimental proof to support their claims.
- The flexibility and scalability of their method are demonstrated by successfully swapping out different LLMs with varying numbers of parameters.
- The continuous improvement over previous SOTA methods on multiple datasets, along with their thorough ablations, strengthens their claims and illustrates the method's flexibility.
Weaknesses: - I'm not a fan of your chosen color scheme. While I appreciate that you try to include colors for different entities, it is too vivid (jump from Figure 3 to 4 to 5), making the figures tough to read, especially when skimming through them quickly (which most first readers will do initially). It is not a major objection, but improving this aspect could enhance your manuscript.
- Typo:
- L21: etc [22, 42], missing dot.
- The paper makes certain simplified assumptions, such as treating time series segments independently for embedding. This might overlook complex inter-dependencies present in real-world time series data. Addressing these inter-dependencies could enhance the robustness and applicability of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. **Claim of missing data for foundation models:** I am aware that there is not a plethora of datasets available, but did you look into [1] (datasets) or [2] (models)?
2. While I see the autoregressive part as a great contribution, I am not entirely convinced that the proposed embedding scheme, which only fine-tunes a small portion, brings a competitive improvement. There are multiple works [4, 5] that do not require any fine-tuning of the LLMs.
3. L106: "Unlike previous works, our proposed method regards time series itself as the instructive prompt. It avoids the modality gap caused by concatenating time series and language tokens directly." This statement gives the impression that your approach is superior. I would be interested to see how your model compares to more of these learnable-prompt methods, especially how it compares to *Test* [4] performance-wise, which appears to be superior to Time-LLM.
4. How does AutoTimes handle missing data or irregular time series intervals? Your claimed improvement over [6], which does not use language directly, has the disadvantage that you cannot simply overcome NaNs in the time series directly.
5. "Unlike previous works, our proposed method regards time series itself as the instructive prompt. It avoids the modality gap caused by concatenating time series and language tokens directly." I am a bit confused. What kind of model checkpoint are you using? Instruction tuned? An instructive prompt is a directive where the user provides explicit instructions to guide the response. I thought you were skipping the language level. Could you outline how you embed your time series and text and how exactly this is fed into the model?
---
[1] Goswami et al. "MOMENT: A Family of Open Time-series Foundation Models." ICML 2024.
[2] Ekambaram et al. "Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series" arXiv:2401.03955
[3] Chang et al. "LLM4TS: Aligning Pre-Trained LLMs as Data-Efficient Time-Series Forecasters" 2024
[4] Sun et al. "TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series" ICLR 2024
[5] Wang et al. "Xinglei Wang, Meng Fang, Zichao Zeng, and Tao Cheng. Where Would I Go Next? Large Language Models as Human Mobility Predictors" 2023
[6] Gruver, Nate, et al. “Large Language Models Are Zero-Shot Time Series Forecasters.” _Advances in Neural Information Processing Systems_, 2023
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While the paper provides a limitations section, I don't think it is sufficient to bury it in the appendix. During my first read, I thought the authors entirely skipped the limitations until I found it in the checklist.
Here are some specific points that I believe are missing or need improvement:
1. Although the paper claims that the embedding and projection layers only account for 0.1% of the total parameters, the scalability of these layers for very large datasets or extremely long time series is not thoroughly discussed. An evaluation of how the method performs with significantly larger datasets would strengthen the paper.
2. While the authors mention that they leave out real-world datasets for future work, I think the approach was not tested on datasets with missing values. This is especially important as their tokenization scheme seems to be based on non-overlapping patches, which are known to lose their locality assumption when missing values occur. An analysis of how the model handles missing data would provide a more comprehensive evaluation of its robustness and applicability.
In addition, the authors addressed broader societal impacts sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer EAH7
Many thanks to Reviewer EAH7 for providing a thorough, insightful review and recognizing our contributions.
**Q1**: Suggestion to improve the presentation of the paper.
Thanks for your valuable feedback regarding the color scheme and the mentioned typo. We will use a more subdued scheme and fix the typo in the revision.
**Q2**: Address inter-dependencies in real-world time series data.
We acknowledge that real-world multivariate time series often exhibit complex inter-dependencies that can significantly impact analysis. Regarding this, AutoTimes adopts Channel Independence like previous methods and further uses the timestamp as the position embedding to implicitly align different variates.
As you insightfully point out, it is necessary to explore the complex inter-dependencies, which is a hot topic in current deep time series models. It is also an essential problem for LLM4TS methods since the gap between natural language (1-D discrete token sequence) and time series (multi-dimensional continuous sequence) poses increasing challenges for LLMs to explicitly utilize the relationship between sequences.
We will explore several potential approaches: **integrating textual descriptions of variates** and **employing adaptors for variate correlating**. Your suggestion will guide us in refining our methodology.
**Q3**: Claim of missing data for foundation models and explore the scalability on larger datasets.
Thanks for the mentioned works; we are excited to see recent efforts advancing the development of datasets and pre-trained large models in the field of time series. We will cite them in the related work section and polish the claim.
Based on the mentioned works, we also provide an evaluation of AutoTimes on larger datasets (Time-Series Pile[1]) to address your concern about the scalability of the trainable layers:
|Performance on subset (MSE\|MAE)|nn.Linear|3-layer MLP|
|-|-|-|
|ETTh1|0.724\|0.586|**0.363**\|**0.395**|
|Weather|0.288\|0.335|**0.166**\|**0.211**|
|ECL|0.856\|0.764|**0.135**\|**0.231**|
|Traffic|1.393\|0.799|**0.351**\|**0.247**|
These results on layer scalability highlight the importance of designing the embedding scheme to handle heterogeneous time series, which provides good guidance for our future research.
**Q4**: The effectiveness of fine-tuning and comparison with more LLM4TS methods.
We appreciate your concerns about the effectiveness of our fine-tuning approach, especially in light of several works that use LLMs for time series without fine-tuning. **We intend to leverage the general token transition of LLMs while tailoring it to the specific characteristics of the dataset**, which is achieved by freezing the LLM backbone and training new embedding layers only.
We provided detailed code and scripts in the $\underline{\text{supplementary material}}$ to ensure all the results are reproducible. Further, we compare with the mentioned TEST [2]; AutoTimes achieves better performance on the majority of datasets.
|Datasets (MSE\|MAE)|AutoTimes|TEST|
|-|-|-|
|ETTh1|**0.389**\|**0.422**|0.414\|0.431|
|Weather|0.235\|0.273|**0.229**\|**0.271**|
|ECL|**0.159**\|**0.253**|0.162\|0.254|
|Traffic|**0.374**\|**0.264**|0.430\|0.295|
**Q5**: How does AutoTimes handle missing data or irregular time series intervals?
Thank you for your insightful question. At this stage, AutoTimes does not specifically address missing data or irregular intervals. This is consistent with current works, which focus on regular forecasting scenarios where time series are complete and consistently sampled.
We acknowledge that handling missing values and irregular intervals is a critical aspect of time series analysis, and we will add this as a limitation and conduct evaluations on well-acknowledged datasets in the future.
According to your suggestions, we will also consider moving the limitations section to a more prominent position within the main body of the paper to ensure that readers can easily access and engage with this critical information.
**Q6**: How does AutoTime embed and feed time series and texts?
The claim: "our proposed method regards time series itself as the instructive prompt..." refers to the following:
* As depicted in $\underline{\text{Figure 1(b)}}$, previous LLM4TS methods feed (language tokens | lookback time series) to handle multimodal input.
* As depicted in $\underline{\text{Figure 3}}$, AutoTimes feeds (time series prompt | lookback time series) to enable in-context forecasting, where the time series is self-prompted.
* As shown in $\underline{\text{Equation 5}}$, AutoTimes uses textual timestamps as position embeddings and adds them with the corresponding series embedding of each token.
Therefore, AutoTimes presents **prompting time series in two directions**. Horizontally, AutoTimes appends the time series prompt in front of the lookback series and regards it as a task demonstration. Vertically (series embedding + timestamp embedding), merging token-wise embeddings leverages timestamps in natural language and aligns multiple variates.
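As a rough illustration of the vertical direction, the token embedding of Equation 5 can be sketched as follows (the layer sizes, class name, and tensor shapes here are hypothetical, not the paper's exact implementation):

```python
import torch
import torch.nn as nn

SEG_LEN, LLM_DIM = 96, 768  # hypothetical segment length and frozen-LLM hidden size

class SegmentEmbedder(nn.Module):
    """Embed each non-overlapping series segment and add the (precomputed)
    LLM embedding of its textual timestamp as the position embedding."""
    def __init__(self):
        super().__init__()
        # The only trainable layer here; the LLM backbone itself stays frozen.
        self.proj = nn.Linear(SEG_LEN, LLM_DIM)

    def forward(self, segments, timestamp_emb):
        # segments: (batch, n_tokens, SEG_LEN)
        # timestamp_emb: (batch, n_tokens, LLM_DIM), from the LLM's text embedding
        return self.proj(segments) + timestamp_emb  # series + timestamp embedding

embedder = SegmentEmbedder()
x = torch.randn(2, 8, SEG_LEN)   # a lookback window split into 8 segments
t = torch.randn(2, 8, LLM_DIM)   # stand-in for textual-timestamp embeddings
tokens = embedder(x, t)          # (2, 8, LLM_DIM), fed to the frozen LLM
```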
[1] Goswami et al. Moment: A Family of Open Time-Series Foundation Models.
[2] Sun et al. TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series.
---
Rebuttal Comment 1.1:
Title: Confusion
Comment: I thank the authors for their thorough answers. I have to admit that I am, unfortunately, a bit confused. Would you mind double-checking your response and ensuring a consistent numbering of my questions? I asked five questions, but you referred to Q6, likely because you enumerated my outlined weaknesses as questions. Additionally, you seem to have combined Questions 1 and 2 into Q3; however, you appear to have only answered Question 2:
"Based on the mentioned works, we also provide an evaluation of AutoTimes on larger datasets (Time-Series Pile[1])."
Where did you do that? Before I attempt to decipher and remap the questions to their original form, I could be mistaken. To avoid any wrong conclusions, I kindly ask the authors to restructure it for me.
---
Reply to Comment 1.1.1:
Title: Restructured Response (Part 1)
Comment: Thank you for your thoughtful feedback and for bringing these points to our attention. We apologize for any confusion caused by the numbering and organization of our responses.
Based on the original context of the rebuttal, the response is restructured as follows:
**W1 & W2**: Suggestions for improving the presentation of the paper.
Thanks for your valuable feedback regarding the color scheme and the mentioned typo. We will use a more subdued scheme and fix the typo in the revision.
**W3**: Address inter-dependencies in real-world time series data.
We acknowledge that real-world multivariate time series often exhibit complex inter-dependencies that can significantly impact analysis. Regarding this, AutoTimes adopts Channel Independence like previous methods and further uses the timestamp as the position embedding to implicitly align different variates.
As you insightfully point out, it is necessary to explore the complex inter-dependencies, which is a hot topic in current deep time series models. It is also an essential problem for LLM4TS methods since the gap between natural language (1-D discrete token sequence) and time series (multi-dimensional continuous sequence) poses increasing challenges for LLMs to explicitly utilize the relationship between sequences.
We will explore several potential approaches: **integrating textual descriptions of variates** and **employing adaptors for variate correlating**. Your suggestion will guide us in refining our methodology.
**Q1**: Claim of missing data for foundation models.
Thanks for the mentioned works; we are excited to see recent efforts advancing the development of datasets and pre-trained large models in the field of time series. We will cite them in the related work section and polish the mentioned claim.
**Q2**: About the effectiveness of the proposed embedding scheme.
We appreciate your concerns about the effectiveness of our approach, especially in light of several works that use LLMs for time series without fine-tuning. The mentioned works indeed provide an out-of-the-box experience that is free from fine-tuning.
To boost the performance further, AutoTimes intends to **leverage the general token transition of LLMs while tailoring it to the specific characteristics of the dataset**, which is achieved by freezing the LLM backbone (keeping the token transition) and training new embedding layers (learning the dataset-dependent embeddings of time series).
Please also refer to the detailed code and scripts provided in the $\underline{\text{supplementary material}}$, by which we ensure all the results of the paper are reproducible.
**Q3**: Comparison with the performance of TEST[1].
We compare with TEST performance-wise. The averaged results over four prediction lengths {96, 192, 336, 720} are reported. AutoTimes achieves better performance on the majority of datasets.
| Datasets (MSE\|MAE) | AutoTimes | TEST |
| ------------------- | -------------------- | -------------------- |
| ETTh1 | **0.389**\|**0.422** | 0.414\|0.431 |
| Weather | 0.235\|0.273 | **0.229**\|**0.271** |
| ECL | **0.159**\|**0.253** | 0.162\|0.254 |
| Traffic | **0.374**\|**0.264** | 0.430\|0.295 | | Summary: This paper proposes a model named AutoTimes to repurpose LLMs for time series forecasting. Different from previous methods that use flattening and linear projection to get a prediction, this model repurposes LLMs in an autoregressive way, which is closer to the pre-training process of LLMs. Specifically, the main backbone of LLMs is frozen, and new patch embedding/output layers are added like in previous works. Absolute timestamps are embedded through LLMs to serve as position embeddings. Experiments show that the proposed model achieves SOTA performance and is more efficient.
Strengths: 1. The main body of the thesis is clear and easy to understand.
2. Reproposing LLMs in an autoregressive way is intuitive and more reasonable than previous linear projection ones.
3. Numerous experiments were conducted to demonstrate the effectiveness of the proposed method.
Weaknesses: 1. My main concern is whether this type of method truly leverages the capabilities of the pre-trained LLMs. Please conduct the following ablation study: randomly initialize a large model, freeze its parameters, train it using the proposed method, and compare it with the pre-trained ones.
2. The description of multimodality in Table 1 seems to be overselling, as the proposed model only uses text for timestamp embedding and does not have the capability to leverage natural language.
3. The in-context forecasting part is confusing: 1) What are the use cases for such a method? Table 16 shows that it is more effective to extend the lookback window. So in what case can we not get a longer lookback window but can get a window from long ago? 2) Such a method cannot even ensure the input series is continuous, so where does the improvement come from?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is LLM necessary for timestamp embedding? What if replacing it with ordinary nn.Embedding like Informer?
2. How does the proposed model reduce the error accumulation of autoregressive models, given that there is no specific design for this?
3. Please report the number of learnable parameters in Table 5. Larger models have larger hidden states so that patch embedding/output layers for them have more learnable parameters. Therefore, it is uncertain whether the performance improvement comes from the scaling behavior of LLMs or from having more learnable parameters for tuning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer 1PWH
Many thanks to Reviewer 1PWH for providing a detailed and insightful review.
**Q1**: Whether AutoTimes truly leverages the capabilities of the pre-trained LLMs.
We noticed that a recent work [1] has raised questions about non-autoregressive LLM4TS methods. It is also the main claim of our paper that an inconsistent model structure and generative approach cause insufficient utilization of LLMs for forecasting. We thoroughly conduct all types of ablations from [1] (**Random Init** is the ablation suggested by the reviewer):
|ETTh1 (MSE\|MAE)| AutoTimes|Random Init|w/o LLM|LLM2Attn|LLM2Trsf|
|-|-|-|-|-|-|
|Pred-96|**0.360**\|**0.400**|0.373\|0.408|0.365\|0.399|0.383\|0.404|0.377\|0.401|
|Pred-192|**0.388**\|**0.419**|0.394\|0.421|0.405\|0.425|0.414\|0.422|0.406\|0.420|
|Pred-336|**0.401**\|**0.429**|0.405\|0.430|0.429\|0.441|0.431\|0.432|0.421\|0.431|
|Pred-720|**0.406**\|**0.440**|0.418\|0.447|0.450\|0.468|0.456\|0.454|0.449\|0.452|
|ECL (MSE\|MAE)|AutoTimes|Random Init|w/o LLM|LLM2Attn|LLM2Trsf|
|-|-|-|-|-|-|
|Pred-96|**0.129**\|**0.225**|0.148\|0.245|0.171\|0.263|0.156\|0.255|0.162\|0.263|
|Pred-192|**0.147**\|**0.241**|0.163\|0.259|0.192\|0.282|0.178\|0.276|0.189\|0.287|
|Pred-336|**0.162**\|**0.258**|0.179\|0.274|0.216\|0.304|0.198\|0.295|0.216\|0.309|
|Pred-720|**0.199**\|**0.288**|0.217\|0.305|0.264\|0.342|0.230\|0.320|0.258\|0.340|
The above results highlight that the autoregressive approach of AutoTimes can truly utilize the LLM. The core difference: instead of regarding LLMs as representation extractors in a BERT style, we identify the mechanism of LLM4TS: **the general-purpose token transition is transferable between time series and natural language**, such that the generation ability of LLMs can be fully revitalized.
**Q2**: The description of multimodality in Table 1 seems to be overselling.
Thanks for your suggestion. In the initial version, we demonstrate that AutoTimes can take advantage of textual timestamps, which are the most accessible in real-world applications. Considering the scope of multimodal models, we will remove this point from Table 1 unless the model is evaluated on well-acknowledged multimodal datasets.
**Q3**: The use cases for in-context forecasting and how it yields improvement.
The value of the proposed in-context forecasting is to **extend the input context of time series forecasting beyond a continuous lookback window**. As the reviewer mentioned, $\underline{\text{Table 16}}$ shows that extending the lookback window (P.2) and using trivial prompts (P.3) excel on different subsets, respectively, but the overall difference is small.
Since the essence of prompts is to incorporate useful domain-specific knowledge, here is one use case of in-context forecasting: when predicting the weather for one day, one approach is to extend the lookback length from days to weeks. However, this can also introduce noisy information, since non-stationary meteorological conditions change with the seasons. Another practical way is to consider how the weather changed on the same day in the last year (or years). Although the input is not continuous, **the input context becomes more relevant based on prior knowledge** about the (yearly) periodicity. Therefore, in-context forecasting makes prior knowledge incorporable and yields the improvement.
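This use case can be sketched with a toy construction on synthetic data (the function name, `PERIOD`, and all defaults are our own illustration, not the paper's API):

```python
import numpy as np

PERIOD = 24  # assumed periodicity of the data (e.g. hourly data, daily cycle)
series = np.sin(2 * np.pi * np.arange(2000) / PERIOD)  # synthetic periodic series

def build_context(series, t, lookback=288, prompt_len=384, season=None):
    """Build the model input ending at time t.

    season=None -> simply extend the lookback window (continuous context).
    season=k    -> prepend a discontinuous prompt taken k steps earlier,
                   at the same phase as the lookback window.
    """
    window = series[t - lookback:t]
    start = (t - lookback) if season is None else (t - season)
    prompt = series[start - prompt_len:start]
    return np.concatenate([prompt, window])

# Discontinuous but phase-aligned prompt (e.g. "same period in an earlier cycle"):
ctx = build_context(series, t=1500, season=10 * PERIOD)
```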
We also provide an exploration of prompt engineering in $\underline{\text{Q3 of Reviewer acJd}}$, in which the usage of discontinuous lookback windows can indeed outperform continuous lookback windows at well-acknowledged datasets.
**Q4**: Ablations about the timestamp embedding.
As per your suggestion, we compare the ways of embedding timestamps in AutoTimes. Here are the results:
|ETTh1 (MSE\|MAE)|LLM Embedding|nn.Embedding|w/o Embedding|
|-|-|-|-|
|Pred-96|**0.360**\|**0.400**|0.370\|0.405|0.368\|0.402|
|Pred-192|**0.388**\|**0.419**|0.396\|0.422|0.395\|0.421|
|Pred-336|**0.401**\|**0.429**|0.408\|0.430|0.413\|0.433|
|Pred-720|**0.406**\|**0.440**|0.422\|0.448|0.439\|0.459|
|ECL (MSE\|MAE)|LLM Embedding|nn.Embedding|w/o Embedding|
|-|-|-|-|
|Pred-96|**0.129**\|**0.225**|0.132\|0.231|0.131\|0.227|
|Pred-192|**0.147**\|**0.241**|0.150\|0.243|0.149\|0.243|
|Pred-336|**0.162**\|**0.258**|0.165\|0.260|0.166\|0.261|
|Pred-720|**0.199**\|**0.288**|0.203\|0.291|0.204\|0.293|
Results show that using timestamp embeddings from LLMs achieves better performance, which indicates better alignment with learned series embeddings in AutoTimes.
**Q5**: How does the method overcome error accumulation?
It is true that there is no specific design for it in AutoTimes. Actually, the performance degradation during rolling forecasts comes not only from the gap between the ground truth and the prediction (error accumulation) but also from the dropping of the lookback time series (lookback cut-off).
To be more precise, AutoTimes for one-for-all scenarios mainly copes with the second issue: predicting the next token of each position to keep our LLM-based forecaster feasible on prolonged inputs, while non-autoregressive models have a fixed input length. We will rephrase the relevant statements in our paper.
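For readers unfamiliar with this setup, segment-wise autoregressive rollout can be sketched as follows (a minimal sketch; the persistence `model` below is a placeholder, not AutoTimes itself):

```python
import torch

SEG_LEN = 96  # length of one time-series token (segment)

@torch.no_grad()
def rolling_forecast(model, lookback, horizon):
    """Roll out `horizon` steps by repeatedly predicting the next segment
    and appending it to the context, so one model covers any horizon."""
    context = lookback.clone()               # (batch, length)
    for _ in range(-(-horizon // SEG_LEN)):  # ceil(horizon / SEG_LEN) rollouts
        next_seg = model(context)            # (batch, SEG_LEN) predicted segment
        context = torch.cat([context, next_seg], dim=1)
    return context[:, lookback.shape[1]:][:, :horizon]

# Placeholder "model": naive persistence that repeats the last segment.
persistence = lambda ctx: ctx[:, -SEG_LEN:]
out = rolling_forecast(persistence, torch.randn(2, 288), horizon=192)  # (2, 192)
```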
**Q6**: Report the number of learnable parameters in Table 5 and confirm the scaling behavior of LLMs.
Thanks for your scientific rigor. We will include the following results in our revision, where a larger LLaMA-7B with fewer trainable parameters can still achieve better performance compared to OPT (1.3B~6.7B).
|Datasets|GPT-2|OPT-350M|OPT-1.3B|OPT-2.7B|OPT-6.7B|LLaMA-7B|
|-|-|-|-|-|-|-|
|Hidden Dim.|768|1024|2048|2560|4096|4096|
|Embedding layer|2-layer MLP|2-layer MLP|2-layer MLP|2-layer MLP|2-layer MLP|nn.Linear|
|Trainable Param. (M)|0.44|0.58|1.10|1.36|2.15|0.79|
|MSE (Avg)|0.397|0.401|0.396|0.394|0.394|0.389|
[1] Tan et al. Are Language Models Actually Useful for Time Series Forecasting?
[2] Woo et al. Unified Training of Universal Time Series Forecasting Transformers.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. The rebuttal addressed my concerns about the role of LLM and timestamp embedding, as well as the practical application of in-context forecasting. Therefore, I decided to raise my score to 7 and recommend this work to be accepted.
Moreover, I hope the authors can include these experimental results for rebuttal in the final version to make the work more comprehensive, especially for the Q6 Table regarding scaling behavior.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response and Raising the Score
Comment: Thank you for your positive feedback and for raising the score to 7. We are glad to hear that our rebuttal addressed your concerns regarding the true utilization of LLMs in our method, the effectiveness of timestamp embeddings, and the practical application of in-context forecasting.
We appreciate your suggestion to include the experimental results in the final version, particularly for the Q6 Table on scaling behavior. We will ensure that these results are incorporated to enhance the comprehensiveness of our work.
Thank you once again for your support and constructive feedback. We look forward to finalizing the manuscript. | Summary: The authors present in this paper an interesting approach where LLMs are leveraged to be fledged as time series forecasters. This proposed approach is based on freezing the LLM backbone to update a small amount of parameters to generate suitable time series embeddings which, together with time stamps as positional embeddings, provide suitable forecasts. Moreover, the authors provide an interesting notion of in-context forecasting, where the corresponding model is taught with examples how to forecast without gradient updates. Finally, the authors present numerical evaluations on diverse datasets.
Strengths: The authors address an interesting and current challenge in the field of time series forecasting: how to leverage LLMs for time series forecasting with minimal effort and minimal model modification. The authors provide an approach that basically freezes the LLM backbone to only update parameters related to a suitable embedding of time series, together with the usage of timestamp prompts as positional encoders. Moreover, they emphasise that the number of updated parameters is around 0.1% of that of the LLM backbone.
The authors highlight an interesting paradigm that naturally arises from LLMs: how can we do forecaster fine-tuning without gradient updates? This brings in-context forecasting as a contribution of the authors, which seems to be a potentially resourceful idea for the community.
Weaknesses: Perhaps the main weakness comes from the evaluations. The main points that I would invite the authors to aggressively address are the following:
- the amount of baselines used for comparison is very limited. It is completely understandable that the authors miss some baselines because the field is just moving extremely quickly. Nevertheless, the authors should at least cite a relevant fraction of these recent works, for instance :
- - Woo et al, 2024: Unified Training of Universal Time Series Forecasting Transformers
- - Ansari et al, 2024: Chronos: Learning the Language of Time Series
- - Dooly et al, 2023: ForecastPFN: Synthetically-Trained Zero-Shot Forecasting
- - Goswami et al 2024: Moment: A family of open time-series foundation models
- The amount of datasets used is limited. It is understandable that the authors provide a limited amount of datasets for evaluations, as one of the main hindrances in the field is the lack of publicly available data (at least in comparison to other fields). Yet, one can see that other contributions have shared datasets and made them freely available. See for instance:
- - Woo et al, 2024: Unified Training of Universal Time Series Forecasting Transformers
- - - and the corresponding HF source: https://huggingface.co/datasets/Salesforce/lotsa_data
- Evaluations of in-context forecasting are limited. Following the previous points, I think this is one of the most exciting points of the paper, and the reader would appreciate more systematic evaluations and explorations of this notion. Currently there is only one evaluation, which involves the M3 and M4 datasets, perhaps following the approach presented in the One-Fits-All paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: - In Section 3 the authors mention that they aim to forecast covariates as well. Is this correct? In general, one can assume that certain kind of covariates are available in the future, like timestamps or boolean variables indicating that something specific will happen in the future. But in general, I am not sure that the authors truly want to forecast covariates.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have devoted a section describing sensible limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to Reviewer acJd
Many thanks to Reviewer acJd for providing a detailed review and recognizing our contributions.
**Q1**: More baseline models for evaluation.
We acknowledge the importance of the recent works and will certainly incorporate the suggested references in our revision. As per your suggestion, we include the mentioned models based on the official code to enlarge our time series forecasting baselines. Here are the results:
|ETTh1 (MSE)|AutoTimes|MOIRAI|MOMENT|Chronos|
|-|-|-|-|-|
|Pred-96|**0.360**|0.384|0.387|0.571|
|Pred-192|**0.388**|0.425|0.410|0.654|
|Pred-336|**0.401**|0.456|0.422|0.712|
|Pred-720|**0.406**|0.470|0.454|0.774|
|ECL (MSE)|AutoTimes|MOIRAI|MOMENT|Chronos|
|-|-|-|-|-|
|Pred-96|**0.129**|0.158|0.136|-|
|Pred-192|**0.147**|0.174|0.152|-|
|Pred-336|**0.162**|0.191|0.167|-|
|Pred-720|**0.199**|0.229|0.205|-|
|Traffic (MSE)|AutoTimes|MOIRAI|MOMENT|Chronos|
|-|-|-|-|-|
|Pred-96|**0.343**|-|0.391|0.770|
|Pred-192|**0.362**|-|0.404|OOM|
|Pred-336|**0.379**|-|0.414|OOM|
|Pred-720|**0.413**|-|0.450|OOM|
As shown above, MOIRAI and Chronos follow the paradigm of **pre-training -> zero-shot forecasting** (- indicates that the test set is included in the pre-training data and is thus not reported). MOMENT follows **pre-training -> fine-tuning on each dataset and length**. AutoTimes does not involve pre-training on time series; it adopts a pre-trained LLM and **fine-tunes it on each dataset, using one model for all prediction lengths**.
In terms of performance, AutoTimes consistently achieves the best results. Still, we also appreciate the zero-shot forecasting ability of natively trained large time series models, which provide an out-of-the-box experience free from training/tuning.
**Q2**: More benchmark datasets for evaluation.
We appreciate your suggested works that have contributed valuable data resources. Thus, we conduct evaluations on several datasets from [1], which come from various domains and applications.
|Australian Electricity Demand (MSE)|AutoTimes|PatchTST|iTransformer|DLinear|
|-|-|-|-|-|
|Pred-96|**0.150**|0.163|0.153|0.167|
|Pred-192|**0.203**|0.216|0.214|0.211|
|Pred-336|**0.236**|0.255|0.244|0.237|
|Pred-720|**0.264**|0.289|0.267|0.269|
|Bdg-2 Panther (MSE)|AutoTimes|PatchTST|iTransformer|DLinear|
|-|-|-|-|-|
|Pred-96|**0.537**|0.565|0.546|0.581|
|Pred-192|**0.663**|0.707|0.694|0.693|
|Pred-336|**0.741**|0.807|0.774|0.781|
|Pred-720|**0.802**|0.911|0.832|0.829|
|Oikolab Weather (MSE)|AutoTimes|PatchTST|iTransformer|DLinear|
|-|-|-|-|-|
|Pred-96|**0.603**|0.635|0.630|0.663|
|Pred-192|**0.643**|0.678|0.660|0.694|
|Pred-336|**0.666**|0.685|0.677|0.711|
|Pred-720|**0.697**|0.710|0.698|0.727|
The above results show that AutoTimes still outperforms the state-of-the-art deep models, which further strengthens the robustness of our experiments. We will include these in the revision and conduct more complete evaluations on the LOTSA dataset [1].
**Q3**: Systematic evaluations on the proposed in-context forecasting.
Thanks a lot for your scientific rigor. We adopt the M3 and M4 datasets, consistent with the zero-shot experiment of the One-Fits-All paper, to demonstrate the improvement from our in-context paradigm. As per your request, we extend the evaluation to widely recognized datasets. Details of the experiment are as follows:
Using a model checkpoint trained on a source domain (Traffic), we conduct forecasting without gradient updates on the target ETT datasets. We evaluate the Pred-96 performance on the last variate (OT).
* For the zero-shot scenario, the input is Length-288 lookback series.
* For in-context forecasting, the input is (Length-384 series prompt + Length-288 lookback series). Considering the dataset periodicity, the prompt is uniformly selected as the Ahead-24 series of the original lookback series.
* To eliminate the performance boost that comes from extending the input length, we also provide the results of Length-672 lookback series in the zero-shot scenario.
|Dataset (MSE)|In-Context (Prompt-384 + Input-288)|Zero-Shot (Input-288)|Zero-Shot (Input-672)|
|-|-|-|-|
|ETTh1-OT|**0.0645**|0.0673|0.0657|
|ETTh2-OT|**0.1513**|0.1637|0.1538|
|ETTm1-OT|**0.0399**|0.0424|0.0415|
|ETTm2-OT|**0.1629**|0.1669|0.1701|
Moreover, we further delve into the effect of different strategies to select time series prompts:
|Dataset (MSE)|Ahead-Period|Ahead-Random|Fixed Prompt|Other-Variates|Baseline (Zero-Shot)|
|-|-|-|-|-|-|
|ETTh1-OT|**0.0645**|0.0666|0.0769|0.1263|0.0657|
|ETTh2-OT|**0.1513**|0.1621|0.1859|0.1780|0.1538|
|ETTm1-OT|**0.0399**|0.0407|0.0512|0.0852|0.0415|
|ETTm2-OT|**0.1629**|0.1719|0.2104|0.2297|0.1701|
* **Ahead-Period**: The prompt is uniformly selected as the Ahead-24 series of the original lookback series where 24 is one of the periods of ETT.
* **Ahead-Random**: The prompt is randomly selected as the previous series of the original lookback series.
* **Fixed Prompt**: The prompt is fixed as the first piece of the series in the variate-OT.
* **Other-Variates**: The prompt is uniformly selected as the Ahead-24 series, but comes from another variate of ETT.
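The four selection strategies above can be sketched as one dispatch function. This is a hedged illustration: the index conventions, offsets, and names are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def select_prompt(data, t, variate, strategy,
                  lookback=288, prompt_len=384, period=24, seed=0):
    """Select a Length-`prompt_len` prompt for the lookback window ending at `t`.

    `data` has shape (time, num_variates); mirrors the four strategies
    compared in the table, under our own index conventions.
    """
    start = t - lookback  # where the lookback window begins
    if strategy == "ahead_period":
        # End the prompt one period (24 steps) before the lookback window,
        # so the prompt stays in phase with the dataset periodicity.
        end = start - period
        return data[end - prompt_len:end, variate]
    if strategy == "ahead_random":
        # A randomly positioned earlier segment of the same variate.
        rng = np.random.default_rng(seed)
        end = int(rng.integers(prompt_len, start + 1))
        return data[end - prompt_len:end, variate]
    if strategy == "fixed":
        # Always the first piece of the series for this variate.
        return data[:prompt_len, variate]
    if strategy == "other_variate":
        # Same in-phase position, but taken from a different variate.
        end = start - period
        other = (variate + 1) % data.shape[1]
        return data[end - prompt_len:end, other]
    raise ValueError(f"unknown strategy: {strategy}")
```

Under this sketch, only the position and source variate of the prompt change between strategies; the forecaster and the lookback window stay fixed, so differences in MSE are attributable to prompt choice alone.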
The above results demonstrate the effectiveness of suitable time series prompts and highlight the influence of prompt engineering. Using in-period time series prompts can even outperform extending the lookback window. We also provide a detailed explanation in $\underline{\text{Q3 of Reviewer 1PWH}}$. Thus, the in-context forecasting paradigm is **meaningful for real-world applications**.
**Q4**: Whether we aim to forecast covariates or not?
In Section 3, we use the timestamps as the covariate to improve forecasting, but we do not predict the covariate. As shown in $\underline{\text{Equation 1 of the main text}}$, we are concerned about the multivariate scenario, where each time series needs to be predicted.
[1] Woo et al. Unified Training of Universal Time Series Forecasting Transformers.
---
Rebuttal Comment 1.1:
Title: Complete Evaluation Results
Comment: Dear Reviewer acJd:
We sincerely appreciate your insightful pre-rebuttal review, which has inspired us to substantially improve our paper.
According to your suggestions, we have made every effort to complete the evaluations, including more baseline models, more benchmark datasets, and evaluations/explorations of in-context forecasting. Experimentally, we verify that our method still achieves the best performance against the new baselines and on the new benchmarks, and that the in-context forecasting paradigm is meaningful for real-world forecasting.
**Due to the word limit of the rebuttal, we provide the complete results of the previous questions here**:
**1. More baseline models for evaluation.**
| ETTh1 (MSE\|MAE) | AutoTimes | MOIRAI | MOMENT | Chronos |
| - | - | - | - | - |
| Pred-96 | **0.360**\|**0.400** | 0.384\|0.402 | 0.387\|0.410 | 0.571\|0.464 |
| Pred-192 | **0.388**\|**0.419** | 0.425\|0.429 | 0.410\|0.426 | 0.654\|0.504 |
| Pred-336 | **0.401**\|**0.429** | 0.456\|0.450 | 0.422\|0.437 | 0.712\|0.530 |
| Pred-720 | **0.406**\| **0.440** | 0.470\|0.473 | 0.454\|0.472 | 0.774\|0.570 |
| Average | **0.389**\|**0.422** | 0.434\|0.439 | 0.418\|0.436 | 0.678\|0.517 |
| ECL (MSE\|MAE) | AutoTimes | MOIRAI | MOMENT | Chronos |
| - | - | - | - | - |
| Pred-96 | **0.129**\|**0.225** | 0.158\|0.248 | 0.136\|0.233 | - |
| Pred-192 | **0.147**\|**0.241** | 0.174\|0.263 | 0.152\|0.247 | - |
| Pred-336 | **0.162**\|**0.258** | 0.191\|0.278 | 0.167\|0.264 | - |
| Pred-720 | **0.199**\|**0.288** | 0.229\|0.307 | 0.205\|0.295 | - |
| Average | **0.159**\|**0.253** | 0.188\|0.274 | 0.165\|0.260 | - |
| Traffic (MSE\|MAE) | AutoTimes | MOIRAI | MOMENT | Chronos |
| - | - | - | - | - |
| Pred-96 | **0.343**\| **0.248** | - | 0.391\|0.282 | 0.770\|0.552 |
| Pred-192 | **0.362**\|**0.257** | - | 0.404\|0.287 | OOM |
| Pred-336 | **0.379**\| **0.266** | - | 0.414\|0.292 | OOM |
| Pred-720 | **0.413**\| **0.284** | - | 0.450\|0.310 | OOM |
| Average | **0.374**\|**0.264** | - | 0.415 \|0.293 | OOM |
**2. More benchmark datasets for evaluation.**
| Australian Electricity Demand (MSE\|MAE) | AutoTimes | PatchTST | iTransformer | DLinear |
| - | - | - | - | - |
| Pred-96 | **0.150**\|**0.228** | 0.163\|0.242 | 0.153\|0.233 | 0.167\|0.250 |
| Pred-192 | **0.203**\|**0.268** | 0.216\|0.284 | 0.214\|0.270 | 0.211\|0.283 |
| Pred-336 | **0.236**\|**0.293** | 0.255\|0.312 | 0.244\|0.295 | 0.237\|0.302 |
| Pred-720 | **0.264**\|**0.315** | 0.289\|0.343 | 0.267\|0.318 | 0.269\|0.332 |
| Average | **0.213**\|**0.276** | 0.231\|0.295 | 0.220\|0.279 | 0.221\|0.292 |
| Bdg-2 Panther (MSE\|MAE) | AutoTimes | PatchTST | iTransformer | DLinear |
| - | - | - | - | - |
| Pred-96 | **0.537**\|**0.458** | 0.565\|0.476 | 0.546\|0.462 | 0.581\|0.499 |
| Pred-192 | **0.663**\|**0.511** | 0.707\|0.543 | 0.694\|0.524 | 0.693\|0.547 |
| Pred-336 | **0.741**\|**0.544** | 0.807\|0.584 | 0.774\|0.564 | 0.781\|0.584 |
| Pred-720 | **0.802**\|**0.575** | 0.911\|0.649 | 0.832\|0.597 | 0.829\|0.615 |
| Average | **0.686**\|**0.522** | 0.748\|0.563 | 0.712\|0.537 | 0.721\|0.561 |
| Oikolab Weather (MSE\|MAE) | AutoTimes | PatchTST | iTransformer | DLinear |
| - | - | - | - | - |
| Pred-96 | **0.603**\|**0.577** | 0.635\|0.603 | 0.630\|0.591 | 0.663\|0.611 |
| Pred-192 | **0.643**\|**0.602** | 0.678\|0.630 | 0.660\|0.609 | 0.694\|0.633 |
| Pred-336 | **0.666**\|**0.615** | 0.685\|0.634 | 0.677\|0.620 | 0.711\|0.643 |
| Pred-720 | **0.697**\|**0.632** | 0.710\|0.647 | 0.698\|0.633 | 0.727\|0.654 |
| Average | **0.652**\|**0.607** | 0.677\|0.629 | 0.667\|0.613 | 0.699\|0.635 |
**3. Systematic evaluations of in-context forecasting.**
| Dataset (MSE\|MAE) | In-Context(Prompt-384 + Input-288) | Zero-Shot (Input-288) | Zero-Shot(Input-672) |
| - | - | - | - |
| ETTh1-OT | **0.0645**\|**0.1951** | 0.0673\|0.1996 | 0.0657 \|0.1969 |
| ETTh2-OT | **0.1513**\|**0.3009** | 0.1637\|0.3133 | 0.1538 \|0.3026 |
| ETTm1-OT | **0.0399**\|**0.1512** | 0.0424\|0.1567 | 0.0415 \|0.1534 |
| ETTm2-OT | **0.1629**\|**0.3143** | 0.1669\|0.3137 | 0.1701 \|0.3197 |
| Dataset (MSE\|MAE) | Ahead-Period | Ahead-Random | Fixed Prompt | Other-Variates | Baseline (Zero-Shot) |
| - | - | - | - | - | - |
| ETTh1-OT | **0.0645**\|**0.1951** | 0.0666\|0.1988 | 0.0769\|0.2109 | 0.1263\|0.2796 | 0.0657 \|0.1969 |
| ETTh2-OT | **0.1513**\|**0.3009** | 0.1621\|0.3141 | 0.1859\|0.3346 | 0.1780\|0.3338 | 0.1538 \|0.3026 |
| ETTm1-OT | **0.0399**\|**0.1512** | 0.0407\|0.1529 | 0.0512\|0.1733 | 0.0852\|0.2284 | 0.0415 \|0.1534 |
| ETTm2-OT | **0.1629**\|**0.3143** | 0.1719\|0.3216 | 0.2104\|0.3649 | 0.2297\|0.3738 | 0.1701 \|0.3197 |
Given the limited timeframe for author-reviewer discussion, please kindly let us know if our response has addressed your concerns. Your feedback is invaluable in helping us improve the communication. We'd be very happy to answer any further questions.
All the best,
Authors
---
Rebuttal 2:
Title: Request of Reviewer’s Attention and Feedback
Comment: Thank you for your feedback, but we respectfully disagree that the discussion would correspond to almost a major revision of the paper:
**1. The original baselines compared in our paper are the most up-to-date and advanced deep-learning approaches**.
* For LLM4TS methods, we compared AutoTimes with the state-of-the-art models: TimeLLM (ICLR 2024, cite 157), and FPT (NeurIPS 2023 Spotlight, cite 147).
* For deep time series forecasters, we compared with the most prevalent ones, covering various architectures with state-of-the-art performance: iTransformer (ICLR 2024 Spotlight, cite 189), DLinear (AAAI Oral, cite 978), PatchTST (ICLR 2023, cite 615), TimesNet (ICLR 2023, cite 466).
**2. Our method is evaluated on diverse tasks and extensively analyzed in the original submission**.
* Our evaluations include long-term time series forecasting, short-term time series forecasting, zero-shot time series forecasting, and in-context time series forecasting, covering 10 datasets beyond those in previous LLM4TS methods.
* Our analysis covers method generality, scaling behavior of LLMs, method efficiency, and hyperparameter sensitivity, which are hardly explored in previous LLM4TS approaches.
* Our ablations cover almost all components of the proposed method: textual timestamp embedding (Appendix D.5), the LLM backbone (LoRA Adaptation in Section 4.4), and autoregressive forecasting (Figure 7 and Table 9).
Even if it is indispensable to include the suggested evaluations (the LOTSA datasets, comparisons with concurrent Large Time Series Models, and exploration of our proposed in-context forecasting), which have not been included in any previous work in the LLM4TS direction, **these experiments can easily be added as lines in the existing tables or incorporated in the Appendix, such that the work does not need a major revision**.
We believe that such adjustments can enhance the integrity of the paper without too much impact on the overall structure. We hope you will reconsider this point.
Thank you for your understanding and support.

---

Rebuttal 1:
Rebuttal: ## Summary of Rebuttal
We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for us to improve our paper further.
In this work, we proposed an effective approach (AutoTimes) to repurpose LLMs as **autoregressive forecasters**. Unlike previous works that adopt LLMs as non-autoregressive models, we maintain **consistency with the training and inference of LLMs**. AutoTimes exhibits **variable-length feasibility**, **scalability** with larger LLMs, and **utilization of textual timestamps**, achieving **state-of-the-art performance** with minimal trainable parameters. Further, **we propose in-context forecasting for the first time**, extending the conventional forecasting context to discontinuous time series windows as task demonstrations.
The reviewers generally held positive opinions of our paper, in that the proposed method is "**well-motivated**", "**intuitive**", "**novel**", and "**more reasonable than previous ones**", the paper is "**well-written**" and "**easy to follow**", in-context forecasting is "**resourceful idea for the community**", "**one-for-all benchmark is innovative**", "**numerous experiments were conducted**" and "**achieve continuous improvement over previous SOTA methods**".
The reviewers also raised insightful and constructive concerns. We made every effort to address all the concerns by providing sufficient evidence and requested results. Here is the summary of the major revisions:
* **More evaluations (Reviewer acJd, EAH7)**: We extensively include the mentioned baseline models and datasets. By making great efforts to complete the evaluations, we verify that our method still achieves the best performance and good generality on new baselines and benchmarks.
* **Systematic exploration of in-context forecasting (Reviewer acJd, 1PWH)**: We delve into in-context forecasting, including more evaluated datasets and different strategies to retrieve time series prompts. It highlights the significance of incorporating prior knowledge (such as periodicity) into the prompt engineering of time series.
* **Ablation study (Reviewer 1PWH, k2uw)**: We conduct comprehensive ablations to confirm that AutoTimes truly utilizes the ability of LLMs, and highlight the improvement and significance of autoregression. We also provide ablations on alternative embeddings of timestamps and confirm the scaling behavior of our methodology.
* **Technical contributions (Reviewer k2uw)**: We highlight the contribution of introducing autoregression into LLM4TS first, which facilitates the full abilities and efficiency of LLMs. By analyzing autoregressive and non-autoregressive approaches, we illustrate the advantages in both theoretical and experimental aspects.
* **Polished writing (Reviewer EAH7)**: We summarize the revisions and future directions based on the reviewers' helpful suggestions. The characteristics and limitations of our work are clarified more explicitly.
The valuable suggestions from reviewers are very helpful for us to revise the paper to a better shape. We'd be very happy to answer any further questions.
Looking forward to the reviewer's feedback.